
Religion & Liberty: Volume 34, Number 1

AI and the Discipline of Human Flourishing

    Artificial intelligence (AI) is for the birds. Or at least that’s what the preamble to the “Blueprint for an AI Bill of Rights” seems to suggest. Prepared in October 2022 under the auspices of the White House Office of Science and Technology Policy, this statement begins with an accusation against AI: “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.” 

    Is artificial intelligence an aid or a threat to humanity? It all depends on whether it works with us or instead of us.

    Meanwhile, the American public is already using AI on a daily basis. Although it may seem futuristic and complex, AI is essentially a machine capable of performing a task that would otherwise require human intelligence. Ordinary consumer products such as Siri, Alexa, and Google are all examples of AI. You use AI when you deposit a check with a banking app or use speech-to-text to send a message. Yet AI goes beyond these ordinary consumer products to include innovations such as facial recognition, brain-implanted computer chips, and content-creating generative AI.

    According to this AI Bill of Rights, AI poses many threats to society, which include latent biases, breaches of privacy, and violations against humanity as a result of rendering false information. After declaring American independence from AI, the statement proposes ways to mitigate these threats through the responsible design and use of this technology. This includes proposals for safe and effective systems, protections against algorithmic discrimination, and human alternatives and safeguards.

    First physical magazine cover created by AI

    Similar policy is being enacted by the European Union. The European Commission is seeking to regulate this technology through the AI Act, a proposal for categorizing various AI systems. The AI Act would establish categories of AI ranging from unacceptable risk to high, limited, and minimal risk. It would ban AI systems deemed to pose unacceptable risks, such as social scoring systems and indiscriminate facial recognition. It would tightly regulate high-risk AI systems like robot-assisted surgery and computer verification of travel documents. And limited- or minimal-risk AI systems, ranging from chatbots to spam filters, would face light regulation or none at all. Although it has been years in the works, the AI Act will not take effect until 2025 at the earliest. 

    This gaggle of new policies seeking to regulate AI comes as the result of major new developments in this technology. Over six decades ago, computer scientists began hatching ideas for offloading human intelligence onto machines. Now this fledgling field has soared to new heights—especially with a new class of AI known as generative AI. 

    Generative AI uses machine learning to create new content such as text, images, videos, and sounds. Popular examples of generative AI applications include ChatGPT, Google’s Bard, Dall-E, and Murf. As more people use generative AI applications, this technology is now everywhere—work, school, home, and church.


    This article will not argue that artificial intelligence is for the birds, however. Treating AI like an albatross that must be banned is not a tenable path forward for society. Rather, this article will explore how the increase of AI—and generative AI in particular—raises the stakes for humans to build countervailing disciplines, skills, and communities. A robust human flourishing must counterbalance the rise of machine learning. 

    Sparrows, Owls, and Supercomputers

    Nick Bostrom, director of the Future of Humanity Institute at Oxford University, offers an ornithological parable in his book Superintelligence: Paths, Dangers, Strategies. Bostrom’s unfinished parable of the sparrows goes like this: Several sparrows were hard at work building their nests. After days of long and tiresome work, they began to lament how small and weak they were. Then one of them had an idea: “What if we had an owl who could help us build our nests?” This idea generated excitement about other ways that an owl could be useful to the sparrows. It could look after the young and elderly. It could offer advice. It could guard against the neighborhood cat. 

    With great enthusiasm, they embarked on finding an abandoned owlet or an unhatched owl egg. But a surly sparrow named Scronkfinkle warned that baby owls become big owls. He argued that they should first learn the art of owl taming before bringing an owl into their nest. Several others objected to this warning on the basis that simply finding an owl egg would be more than enough work. These sparrows decided they would begin by getting a baby owl—and then afterward they would consider the challenge of taming it. With unbridled excitement, they ventured off to find a baby owl. 

    Superintelligence by Nick Bostrom, published in 2014

    Meanwhile, only a few sparrows remained in the nest to begin the work of figuring out how sparrows might tame an owl. 

    As with most parables, this story is about more than sparrows and owls. Bostrom offers this unfinished parable as a way to think about the risks of bringing superintelligence such as AI into our midst. Humanity is the sparrows; machine learning is the owl. 

    How does the parable end? In the absence of a conclusion, one must guess what happens to the sparrows. The most gruesome—and unimaginative—ending to the parable is that the owl hatches and eats all the sparrows. For our technological society, this is the notion of an impending AI apocalypse. 

    Might there be another possible ending to this parable? Suppose it ends like this: The owl hatches and does not eat the sparrows. By living with the sparrows, the owl begins to act and think like a sparrow. Instead of eating the sparrows, the owlet learns the sparrow art of nest-building and food-gathering. As more skills and practices shift from the sparrows to the owl, the former get weaker and the latter gets stronger. The only perceptible change is that the sparrows forget the feel of twigs, the air and lift of flight. The adventure of avoiding predators subsides for the sparrows. The craft and technique of nestmaking moves from the sparrows to the domain of owls. 

    A less obvious—but still tragic—ending to this parable is that the owl leads to weaker sparrows with diminished abilities and atrophied discipline, skill, and community. Generative AI will do the same to us unless we pair it with a robust human flourishing. 

    Is Generative AI a Threat to Humanity? 

    Like the owl in this parable, generative AI is an eager young creature ready and willing to serve us. Consumer applications such as ChatGPT and Google’s Bard offer immediate benefits. Yet these powerful devices can also be deleterious to human users.

    The most immediate benefit of generative AI is its ability to complete time-consuming tasks. Generative AI applications can create a detailed travel itinerary based on a set of prompts supplied by a user. Or a homeowner can use these applications to draft an email to a contractor requesting a quote for a household project. Generative AI can write and debug computer programs, create business pitches, and translate text into different languages. These are just a few of the immediate benefits that come from this emerging technology. 

    How does generative AI work? Generative AI is built on a new class of AI systems known as large language models. Drawing on previous iterations of AI, this new paradigm uses something called “foundation models.” Massive amounts of data serve as the foundation for machine learning. Generative AI is a supercomputer fed with terabytes of data in the form of words, language, and text: hence the “large” in large language models. While owls feed upon worms and mice, large language models feed upon language data scraped from the internet. 


    The computer takes all this data, analyzes it, and organizes it into categories and connections called neural networks. The supercomputer uses these neural networks to solve language problems such as text classification, question answering, document summarization, and text generation. Generative AI functions like a very sophisticated autocomplete or chatbot. This technology uses machine learning to “chat” cogent responses to our questions or prompts. 
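The “sophisticated autocomplete” description above can be made concrete with a toy sketch. The bigram model below is a hypothetical, drastically simplified stand-in for the neural networks just described: it merely counts which word follows which in a tiny corpus, then “generates” text by chaining next-word predictions.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, how often each following word appears."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def generate(model, start, length=5):
    """Generate a short continuation by chaining predictions."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(model, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# A miniature "training corpus" in the spirit of the parable.
corpus = ("the sparrows build the nest and the owl watches "
          "the nest while the sparrows gather food")
model = train_bigram_model(corpus)
print(predict_next(model, "owl"))      # the word most often seen after "owl"
print(generate(model, "owl", 2))       # chain two predictions from "owl"
```

A real large language model differs from this sketch in almost every particular: it uses learned neural-network weights rather than raw counts, operates on subword tokens rather than whole words, and is trained on terabytes of text rather than one sentence. But the underlying move is the same: predict the next piece of language from what came before.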

    This basic overview of generative AI allows us to pursue the question at the heart of this article: How might this pose a threat to humanity? Like the sparrows in the fable, generative AI can weaken human discipline, skill, and community. This emerging technology has the power to undermine human flourishing. The more that humans rely on this technology, the greater the risk of atrophy. Without any counterbalances, generative AI will weaken the human capacity for composing literature, poetry, music, and computer programs. This technology can diminish human hermeneutical skills such as literary interpretation or judicial decision-making. As human reliance on these devices increases, the unaided human capacity to compose, interpret, and think may decrease. 

    Pres. Jimmy Carter and his daughter Amy participating in a speed reading course (1977)
    (Photo courtesy U.S. National Archives and Records Administration)

    For example, ChatGPT can create a literature review summarizing books and articles on a particular subject. With owl-like speed, this generative AI application can read, digest, and regurgitate a wealth of information on a given topic. This technology surpasses human speed-reading abilities. As humans offload the work of literature reviews to supercomputers, our skills and abilities in this regard will atrophy. Reading large amounts of text, organizing it into themes, and summarizing the main points will become an antiquated practice. In this regard, generative AI may empower humans to pursue new heights of knowledge by freeing them from monotonous tasks. On the other hand, the owl-like speed of this technology does not necessarily include wisdom or truth. It may thrust us into a “post-truth” future in which we are awash in facts and information but lack guides for what is true or wise.

    There are other, more insidious dangers in losing the human capacity to do this sort of work. Humans will come to depend on these tools for help with composition, interpretation, and translation. Humans will still be able to compose but only with the help of Google’s Bard. Humans will still be able to interpret but only with ChatGPT to do the heavy lifting. Humans will still be able to translate but only with the assistance of Google Translate. These powerful devices make us simultaneously smarter and dumber, stronger and weaker, more human and less human. We will be able to soar to new heights, yet only with the aid of these tools. But like the bird-man Icarus, it all comes crashing down if our artificial tools fail us.  

    Mural advertising Wonder Bread on Beale Street in Memphis (1939)
    (Photo by Marion Post Wolcott / United States Library of Congress)

    Generative AI, however, is not alone in posing this threat to humanity. It’s already part of a long line of devices eroding human skills, discipline, and community. Before generative AI, smartphone apps, for example, were already helping us navigate roadways and augmenting our view of the nighttime sky. While this technology has provided immediate benefits to travelers and stargazers, it has also eroded our ability to determine cardinal directions or find Polaris amid a sea of stars. For that matter, long before generative AI or phone apps, mass-produced Wonder Bread liberated humanity from the toil of endless baking. This development was the best thing since sliced bread, but it brought a profound cooling to the home hearth and the practice of breadmaking. As society progresses with supercomputers, smartphones, and other technological developments, we regress into a state where we cannot write or think, navigate or bake our own bread, without the help of devices.    

    Devices, Focal Things, and Counterbalances

    Long before the advent of generative AI, the late philosopher of technology Albert Borgmann was writing about technological devices. In his book Technology and the Character of Contemporary Life, Borgmann argued that technology has shaped contemporary life around its peculiar pattern. That pattern becomes particularly harmful, he suggested, when there are no means by which one can “prune back the excesses of technology and restrict it to a supporting role.”

    Borgmann makes a distinction between “focal things” and “devices.” A focal thing requires focus, skill, bodily and social engagement, and context. According to Borgmann, a focal thing is “inseparable from its context, namely its world, and from our commerce with the thing and its world, namely, engagement. The experience of a thing is always and also a bodily and social engagement with the thing’s world. In calling forth a manifold engagement, a thing necessarily provides more than one commodity.”

    A wood-burning stove is a focal thing: it requires skill and bodily engagement through woodcutting, seasoning wood, and fire building. This thing exists within a context of forest, home, family, and community. It leads to social engagement and focus as multiple people contribute to the process and becomes a focal point in the home. 

    A device stands in stark contrast to a focal thing. Devices make no demands of skill, strength, or attention. Devices provide commodities for enjoyment without encumbrance or context. The lack of encumbrance makes the commodious consumption of devices thoughtless and disposable. Technological devices produce a commodity without burdening us in any way. Devices are quick, easy, foolproof, and safe. A furnace or central-heating system is a device. These devices provide warmth without any demand from the recipient. ChatGPT is also an example of a device. This device provides a commodity—summaries, essays, answers—without any skill, preparation, or demand on the user. 

    Things require skilled and active human engagement; devices require no focus, engagement, or context. Things require practice; devices invite consumption. Things constitute commanding reality; devices procure disposable reality. Although technological devices ostensibly liberate humanity from toil, poverty, and suffering, this liberation comes with disengagement, distraction, commodification, and isolation. The move from things to devices—or from human creativity to generative AI—is not without consequence. 

    Wood-burning kitchen stove in a log cabin at Grey Roots, Ontario
    (Photo by Robert Taylor / Wikipedia)

    While devices such as ChatGPT and Google’s Bard make no demand of our skills, strength, or attention, focal things do. Focal things such as books, violins, paintbrushes, and fly-fishing rods demand our skills, strength, and attention. Focal things are concrete, tangible, and engaging entities that require a practice to prosper within: “It sponsors discipline and skill which are exercised in a unity of achievement and enjoyment, of mind, body, and the world, of myself and others, and in a social union.” 

    Focal things are related to focal practices. Borgmann argues that corporate worship, table fellowship, reading aloud, and live music are a few of the focal practices that humans might pursue. Attending to these practices will foster discipline and skill, strength and attention, engagement and community. 

    Human flourishing and generative AI devices can coexist with the help of focal things and practices. Chatbot recipes need the counterbalance of human conversation and the culture of the table. Effortless AI summaries of The Brothers Karamazov must be matched with the human effort of listening to Dostoevsky read aloud. Artificially generated images that are a chimera of reality need equal attention to viewing human works of art or venturing outdoors. Living well in a world of chatbots and generative AI requires focal things and practices. Humans will need to pursue countervailing disciplines, skills, and communities. A robust human flourishing must counterbalance the rise of machine learning and generation. 

    A Bird Story with a Different Ending

    Inviting artificial intelligence into our midst does not have to end in tragedy. The novel Watership Down by Richard Adams helps us imagine how superintelligence and flourishing can coexist. The novel tells the story of an intrepid group of rabbits displaced from their warren. As they embark on an adventure of survival, these rabbits conscript the help of a seagull named Kehaar. 

    When the rabbits meet Kehaar, he is recovering from an injury. They feed the bird and bring him into their makeshift warren. As the bird recovers and prepares to leave, a rabbit named Hazel has an idea: What if the bird could search for other warrens and rabbits? Hazel shares his plan with the other rabbits, saying, “The bird will go and search for us!” One of the other rabbits, Blackberry, loves the idea and tells the others, “What a marvelous idea! That bird can find out in a day what we couldn’t discover for ourselves in a thousand!”


    The rabbits enact their plan in a clever way. They hint to the bird that they have a predicament—a warren of buck rabbits without any does—and need help. Kehaar offers his power of flight as a way to help the rabbits search for other warrens. And so the rabbits partner with this bird in their adventure of survival.

    Conscripting the help of this bird does not leave the rabbits weaker or with diminished abilities. This band of bunnies flourishes amid an adventure that requires discipline, skill, and community. The bird’s power does not create an effortless existence for the rabbits. The things and practices needed for rabbits to flourish balance the superintelligence of the bird. Although they employ the bird’s help, the rabbits continue their adventure of survival, which fosters discipline and skill, strength and attention, engagement and community. 

    AI is not simply for the birds. Rejecting this technology out of fear or a desire to preserve the status quo is untenable. Nevertheless, this technology can work against human flourishing and leave us weaker, dumber, and dependent. Flourishing in a world of chatbots will require us to live like rabbits, not sparrows. The sparrows in the unfinished parable sought an owl to work for them. The rabbits of Watership Down sought a seagull to work with them on their adventure of discipline, skill, and community. These are similar stories with very different endings. 

    How will our story end as we bring artificial intelligence into our nests and warrens, homes and schools, churches and communities? That all depends on how well we cultivate disciplines, skills, and communities as we adventure into this brave new world. 


    Rev. A. Trevor Sutton is senior pastor at St. Luke Lutheran Church in Lansing, Mich., and a Ph.D. candidate at Concordia Seminary, St. Louis. He also teaches in the digital humanities graduate program at Concordia University Ann Arbor. Sutton has written several books, including Redeeming Technology (coauthored with Brian Smith, M.D.) and Authentic Christianity (coauthored with Gene Edward Veith Jr.), and his writing on technology has appeared in the Washington Post, Religion News Service, the Christian Century, and elsewhere.