We need to bring together people with diverse competencies (theology, ethics, and technology) to explore the ethical ramifications of AI in everyday life, discover what uses are ethically permissible, and create simple frameworks for everyday Christians to both see and evaluate their own uses of AI.
It took Twitter two years to reach 1 million users. Spotify? Five months. Instagram? Two and a half months.
ChatGPT? Five days.
In the span of five days, AI broke into the conscious awareness of everyday people. For the first time, people played ChatGPT’s linguistic slot machine: tough questions in, surprisingly good answers out. White-collar workers experienced exactly what blue-collar workers did decades earlier: Here’s a machine that can do what I can do at a fraction of the cost.
Alarm bells clanged across culture with a ferocity that, in some cases, bordered on panic. Serious thinkers who knew nothing about AI before ChatGPT felt a sudden need to share their hot takes on social media and podcasts. But another set of thinkers took a different tack: they relished the generative possibility of AI, launching a cottage industry of new AI products promising to change the world.
Within a few months, Christians divided mostly into two camps about the place of AI in the church: (1) critics who fear generative AI will take jobs and sabotage spiritual formation, and (2) pragmatists who hope AI will free ministry leaders to do more.
The rapid technological polarization didn’t surprise me, but I didn’t find it helpful. Over several years of writing about AI, I have struck a mostly cautious tone. Yet despite my fears, I have become increasingly convinced that generative AI, used ethically, could serve kingdom ends.
Now is the time to pause, converse, and think, not to choose sides in a war over a technology most of us still know little about. The wise man is correct: “It is dangerous to have zeal without knowledge” (Prov. 19:2, NET). Both pure critique and pure pragmatism are dangerous because each leaves us far more susceptible to the unethical use of AI than we would be otherwise.
Danger of AI Critics
Let’s start with the fearful. Generative AI (i.e., algorithms that can generate text, images, code, videos, etc.) can do sermon research, create sermon graphics, generate small group questions, and write sermons, blogs, and podcast scripts. Ordinary Christians can bypass pastors and mentors (and Google, for that matter) when they have spiritual questions. Instead, they may ask an AI, which happily dispenses “wisdom.”
Where does this seemingly all-knowing computer get its information, and how does it produce its answers? All large language models (LLMs) are trained on a specific data set; ChatGPT, for example, was trained on the pre-2021 internet. When you ask it a question, it predicts, piece by piece, the answer you’ll find satisfactory, given the parameters of your inquiry and what its training taught it counts as satisfactory. LLMs give crowdsourced answers, calibrated to be crowd-pleasers.
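To make that “prediction” concrete, here is a deliberately tiny sketch in Python, my own toy illustration rather than anything resembling ChatGPT’s actual code. It counts which word tends to follow which in a small sample of text, then “generates” a continuation one word at a time by always choosing the most common follower. The sample text and the predict_next helper are invented for this example; real LLMs work at vastly larger scale with far more sophisticated statistics, but the basic move is the same kind of pattern-matching over training data.

    # Toy illustration of next-word prediction (not any real product's code).
    from collections import Counter, defaultdict

    # Tiny stand-in "training data"; real models train on trillions of words.
    training_text = (
        "ask and it will be given to you seek and you will find "
        "knock and the door will be opened to you"
    ).split()

    # Count which word tends to follow each word in the training text.
    next_word_counts = defaultdict(Counter)
    for current, nxt in zip(training_text, training_text[1:]):
        next_word_counts[current][nxt] += 1

    def predict_next(word):
        """Return the most frequent follower of `word` seen in training."""
        followers = next_word_counts.get(word)
        return followers.most_common(1)[0][0] if followers else "<unknown>"

    # Generate a short continuation, one predicted word at a time.
    word = "ask"
    output = [word]
    for _ in range(6):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))  # prints: ask and it will be given to

Notice that this toy model can only echo patterns it has already seen. That, in miniature, is why the answers feel crowdsourced: the model reflects back whatever its training data said most often.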
If you ask ChatGPT for Christian life advice, it gives only the most conventional wisdom—highly individualistic, self-expressive, rote answers. But the mediocrity of ChatGPT’s answers isn’t the only problem.
Quick, easy access to seemingly infinite information can hijack discipleship. Why do the hard work to learn the Bible and grow in wisdom when a bot can do it for you? LLMs like ChatGPT offer the promise of mastery without work.
So when people say the sky is falling, they’re not totally wrong. AI is a technological shift so titanic that it’ll make the widespread adoption of the internet look like a skiff.