Generative AI is breathtaking in the scope of its abilities, but it is no different from any other technology in that its inventors cannot fully predict or control its benefits and consequences.
Along with technologies of writing, currency, transportation, and food production, AI confronts us with this quandary: how can we use this and not be corrupted by it? It’s a tension explored in works of literature, including J. R. R. Tolkien’s The Lord of the Rings and Mary Shelley’s Frankenstein, and it goes as far back as the book of Genesis—with Noah’s winepress and Babel’s bricks and mortar.
This tension cannot be resolved just with the slogan “Do no harm,” because in order to know whether we have harmed something, we must know that thing’s purpose and proper function.
Suppose I thought an iPad was a cutting board. I might dice potatoes on it and toss it in the dishwasher—as I once saw an elderly man do in a video. If the iPad were a cutting board, no harm would have been done. But the iPad is not a cutting board. It's a slim computer with a colorful touch-sensitive display meant to be used for entertainment and communication, so harm has been done. Likewise, to know whether humans are being harmed or helped by AI, we must know what it means to be a human.
I hold to the Biblical teaching that a human is a being created in God’s image, which means that we exist to relate to God, using all our faculties of mind and body to love him and cultivate the world for his glory. The Bible also teaches that humans, due to sin, have an intractable bent away from God, a distortion that taints every aspect of life.
These convictions undergird three principles I seek to adopt for myself regarding my use of AI: (1) the responsibility principle, (2) the human development principle, and (3) the truth and honesty principle.
1. Responsibility Principle
If I choose to use AI, I am responsible for its effect on me and others.
This is the simplest point. The next is more difficult.
2. Human Development Principle
I will not let AI thwart the development of my character or the joy of being human.
This is the most challenging principle to follow because it's not easy to tell whether a technology is stunting the development of one's character. One helpful way to address this question is to ask: "In my use of AI, what sort of person am I becoming?"
On the one hand, you can use AI to enrich your experience of being human and develop your skills and character. For example, AI can help you . . .
- learn a new language
- use leftovers in the fridge more responsibly and enjoyably
- summarize complex ideas
- memorize or write a poem
- evaluate an email before you send it
- write code
- help your kids with their homework
- plan a road trip
- come up with ideas for Christmas gifts
. . . and a host of other tasks. This doesn't even touch on the use of AI in law, medicine, economics, and climate science.
On the other hand, if you depend on AI to perform certain tasks, you might dull your potential to grow and flourish as an individual endowed with unique strengths and interests.
I offer an example from my area of work. I’m a pastor, so at least once every week I’m responsible for researching, writing, and delivering a sermon based on a text of Scripture. This is a massive effort that takes hours each week, and I pour my most vigorous spiritual and intellectual energy into it.
With ChatGPT, however, I could enter the Scripture text and ask it to generate an expository sermon, complete with an attention-grabbing introduction, a compelling rhetorical flow, vivid illustrations, and a moving conclusion. The sermon would be exegetically and theologically sound, as well as pastorally sensitive. I could then take an hour or so to internalize this sermon and preach it on Sunday. To avoid any dishonesty, I would tell my congregation exactly where the sermon came from: although I was preaching it, the material was generated by artificial intelligence. Finally, suppose my congregation doesn't care exactly how the sermon was developed, so long as it is Biblically sound, which it is.
The question I must ask myself is this: If I did this for the next fifty weeks, what kind of person would I be?