Over the past decade, most of our public debates about the role of technology in human affairs have focused on a set of interrelated technologies: the internet, social media, and the smartphone. The questions and challenges these technologies present are hardly resolved, but with the emergence of AI chatbots such as ChatGPT in late 2022, attention suddenly turned to artificial intelligence and the large language models (LLMs) that underwrite them. These new technologies engender utopian hopes and apocalyptic fears alike, and those hopes and fears introduce a great deal of noise into the information environment, making it difficult to think clearly and to make sound judgments about AI.
The Way We View Technology
First, it is important to challenge a prevailing narrative in tech circles, one which, if adopted uncritically, will warp our thinking and our judgments: the narrative of technological inevitability. This narrative frames the development of technology as a deterministic process to which human beings have no choice but to adapt, and to do so on the terms dictated by the emerging technological apparatus. In this narrative, technological progress unfolds inexorably and resistance is futile.
The truth, of course, is more complicated. As historians of technology have demonstrated, historical contingencies abound, and there are always choices to be made. The appearance of inevitability is a trick played by our tendency to make a neat story out of the past and project it onto the future. And this tendency is one that tech companies are clearly prepared to exploit. But as the historian Thomas Misa has written in Leonardo to the Internet:
Or, in the more concise words of the eminent Catholic media theorist Marshall McLuhan, “There is no inevitability as long as there is a willingness to contemplate what is happening.”
But narratives of inevitability continue to be spun about the impact of AI, typically by those who have a vested financial interest in the large-scale adoption of AI products. The narrative is useful precisely to the degree that it is the rhetorical equivalent of washing one’s hands in the face of events one has the power to sway but would rather not. Joseph Weizenbaum, the computer scientist best known for creating the first chatbot, put the matter with stark moral clarity: “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But in fact there are actors.” And, I would add, by implication, there is responsibility.
Narratives of inevitability have the effect not only of off-loading responsibility from those in positions of power and influence but also, conveniently for them, of foreclosing thought and deliberation by all of us who still have a range of choices, even if they are relatively constrained and potentially costly. If outcomes are inevitable, then there’s nothing to do but assimilate to this predetermined future, to go along for the ride prepared for us whatever the consequences. Beware narratives of technological inevitability. Resistance, if it be necessary, is not necessarily futile. As the writer and entrepreneur Margaret Heffernan reminds us, “Anyone claiming to know the future is just trying to own it.”
Second, consider the possibility that the most radical, immoderate, and seemingly irrational responses to AI, or to any technology, may be appropriate and wise even if they are costly and seemingly foolish. Abiding by and honouring our moral principles and spiritual convictions may mean not making the thing at all. Or, if it is made, refusing to use it. We should not be cowed by the demand to be practical and sensible in matters where such dispositions are morally disastrous. And I say this specifically to those who are committed to the way of life offered in the Sermon on the Mount. This way is foolishness to the Greeks. It is anything but sensible by the moral logic of the present age. We must at least entertain the possibility that the appropriate response to certain technologies at certain times is simply outright refusal. We do not need to water down our conviction with a myriad of qualifiers about how there are undoubtedly good and proper uses.
None of this, so far, tells us anything that is specific to AI. But we need to stiffen our resolve a bit before we consider AI within larger historical and cultural trends. And we need to entertain the possibility of resistance if AI is indeed not only an alternative religion but a kind of apocalyptic, Christian heresy.
The (False) Religion of AI
There are few books about the history of technology that I turn to more frequently than The Religion of Technology: The Divinity of Man and the Spirit of Invention by David Noble, which was first published in 1997. Noble was adamant about how readers should understand his phrase “the religion of technology.” “Modern technology and modern faith,” he argues in the book, “are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.” “This is not meant in a merely metaphorical sense,” he goes on to explain. “It is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”
In a late chapter where he examines AI, Noble summarizes the religious dimensions of AI research as follows:
Characterizing the views of AI pioneer Marvin Minsky, Noble argues that “Minsky described the human brain as nothing more than a ‘meat machine’ and regarded the body, that ‘bloody mess of organic matter,’ as a ‘teleoperator for the brain.’”
Noble cites how another notable technologist of the era, Daniel Crevier, recounted discussions that began to surface among the AI community in the 1980s, in particular the idea of “downloading” the mind into a machine. AI enthusiast Pamela McCorduck describes the search for AI in this way: “The enterprise is a god-like one, the invention—the finding within—of gods represents our reach for the transcendent.” Finally, the computer scientist and author Rudy Rucker claims, “The manifest destiny of mankind is to pass the torch of life and intelligence on to the computer.”
If you attend at all to the rhetoric that emanates from Silicon Valley, you will immediately recognize that while these sentiments are certainly not universally espoused, they remain an important and influential part of the intellectual milieu.
Now, I think it is important to be a bit more specific and to classify what Noble termed “the religion of technology” more precisely as a Christian heresy. It is, after all, in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of the post-Christian world that it has presently flourished. The family resemblance to Christianity can be discerned in technology’s religious pursuit of immortality, transcendence, and reunion with the divine; in its assumption that the core of consciousness can be distinguished from its material embodiment; and in its linear and teleological view of history. But, of course, technology deviates from orthodox Christianity by positing a thoroughly immanent, secular, and graceless path to securing its spiritually inflected aspirations.
This perspective on the quest for AI does not tell us everything we need to know—it is, after all, a perspective—but it does tell us something of consequence about the history of AI and the spirit in which it has been pursued. It also gives useful context for some of the fears and hopes that are often articulated about AI from within Silicon Valley.
What AI Reveals About Us
The second perspective on AI is more sociological than historical. AI is apocalyptic in exactly one narrow sense: It is not causing but rather disclosing the end of a world. That is, the manner in which AI is developed, deployed, and marketed reveals the degree to which our modern institutions are failing and expiring. From this perspective, AI in its present mode can be understood as a fundamentally conservative rather than radically disruptive force to the degree that its function is to preserve modernity’s core commitments to scale, efficiency, rationality, control, and prediction.
This is, incidentally, exactly how Weizenbaum described the impact of computing on society in his 1976 book Computer Power and Human Reason:
“If the triumph of a revolution is to be measured in terms of the profundity of the social revisions it entrained,” Weizenbaum declared, “then there has been no computer revolution.” So today we continue to reap the consequences of a failure to address the problems of growth and complexity in a manner that would serve the human person and human communities. And AI is exposing the failures of the previous application of bureaucratic and computational measures to shore up the old practices and institutions.
We are discovering, for example, that AI is especially adept at displacing or, from the techno-optimist’s perspective, liberating us from human labour in contexts wherein humans had already conformed, willfully or otherwise, to the pattern of a machine. Build a techno-social system which demands that humans act like machines and it turns out that machines can eventually be made to displace humans with relative ease. The claim or fear, then, that AI will displace human beings becomes plausible to the degree that we have already been complicit in a deep deskilling that has unfolded over the past few generations. It is easier to imagine that we are replaceable when we have already outsourced many of our core human competencies. The message of the medium we are presently calling AI is the realization that modern institutions and technologies have been schooling people toward their own future obsolescence.
In the 1960s and ’70s, the social critic Ivan Illich offered a scathing critique of industrial age institutions that can help us see how they prepared the way for modern applications of AI across contemporary society. What these older institutions chiefly taught us, Illich argued, is that we are, in ourselves, inadequate to the task of living together as human beings in the world—that we cannot get on without the products and services that they alone can supply. The institutions thus stripped us of our confidence in judging, acting, and taking responsibility, which is to say they stripped us of our agency. Such institutions are not interested in equipping or empowering us, Illich believed, only in confirming us in an indefinite state of dependence in a consumerist mode. The professions associated with such institutions Illich called “disabling professions.”
“People need new tools to work with rather than tools that ‘work’ for them,” Illich argued. But he concluded that “the institutions of industrial society do just the opposite. As the power of machines increases, the role of persons more and more decreases to that of mere consumers.” In this regard, the present development of AI is of a piece with previous patterns of technological and institutional growth. The trend line is consistent; only now a new class and range of roles and activities are being outsourced for the sake of a system that was already “unaligned” with human values, as it is now fashionable to say, because it demanded the conformity of human beings to inhuman standards of scale, speed, efficiency, and profitability.
This line of institutional and technological development will proceed apace unless we can give some account, provisional and contested as it may necessarily be, of what it is good for people to do regardless of whether a machine can do it better according to certain parameters (faster, cheaper, etc.). And if our institutions—be they political, cultural, or corporate—will not entertain such a conversation, we should at least have it for ourselves and with whatever community to which we might be fortunate enough to belong.
The Response-ability of Humans
It is, after all, against some account of the human person that we must weigh, measure, and judge our technologies and the techno-economic systems in which they are embedded.
Discussions around the human in relation to technology, specifically AI, parallel older debates about science and religion. In that discursive context, there was a form of argument known as “God of the gaps.” The idea was that God was simply a name for the gaps in our scientific understanding of the world. Of course, once those gaps were filled, God would be effectively squeezed out of the metaphysical picture. Similarly, we are operating with a “human of the gaps” model when we try to locate the essence of the human creature by pointing to what cannot yet be accomplished by a machine, whether these be matters of physical prowess, cognitive ability, or creativity. Such an approach to the human is misguided, just as it was when it was applied to God.
Without suggesting that this is an exhaustive and definitive account of the human person, I would invite us to consider the possibility that what is distinctive about the human should be sought in the quality of our capacity to respond to our Creator, the Alpha and Omega of our existence. Indeed, this would be an equally fruitful way of articulating what is distinctive about any of God’s creatures, who all, in a manner proper to their being, likewise respond to their Creator. Or as the seventeenth-century Welsh poet Henry Vaughan told us,
The rising winds
And falling springs,
Birds, beasts, all things
Adore him in their kinds.
This response is a matter of answering the call of God, and thus of our human vocation. While the language of calling and vocation suggests auditory dynamics, I use it simply as shorthand for the multitude of ways that our Creator’s presence is manifest to us. The Creator beckons diverse and multifarious modes of response, described by a rich array of terms, including awe, gratitude, wonder, silence, obedience, service, delight, and praise.
From this perspective, our flourishing is conditioned not so much on the accomplishment of certain feats or tasks, many of which, in any case, exclude the youngest and oldest and most vulnerable among us. Rather, it is conditioned on our capacity to respond to the call of God on us as unique individuals made in his image and thus made to resonate with his presence as it is manifest to us throughout creation. As Brian Brock, in his recently published reading of Genesis, Joining Creation’s Praise, reminds us,
“Modern Christians,” he adds, “need to be shown what it looks like to live with certainty that our true form comes in being responsive to a living and speaking God.” Even in our fallen state, the call of God comes to us as it did to Adam: “Where are you?”
Hearing, Perceiving, Responding
Where might we encounter God’s presence in a manner that invites our response? We hear the call of God in the Holy Scriptures, in the created order, in the other who is made in the image of God, and within the self. The last is not to deify the self but to acknowledge the call of God in times of silence, solitude, and contemplation.
Are we able, then, in these times and places, to perceive the call of God, to discern his presence, to hear the question he is putting to us? And how will AI tools, devices, and systems shape this capacity to hear? As is the case with all technology, novel or prosaic, AI will mediate our experience of the world. Will it make us more attuned to the presence of God, or will it further promote the enclosure of the psyche? Will it encourage us to encounter the other in the fullness of their humanity, to attend to them with patience and care? Will it create the space for the kind of silence and solitude that is required for our spiritual growth?
And if we are able to hear the call of God, how will we respond? Not only will AI mediate our perception, but it will also mediate our action in the world. Will AI, in this mediating role, empower us to respond well and appropriately to the call of God, or will it leave us inarticulate and unresponsive? This latter question is particularly pressing with regard to LLMs, which promise to outsource the labour of articulation.
The argument implicit in the questions above, and in the framing of the human being primarily as a creature whose flourishing entails the capacity to respond appropriately to the call of its Creator, is essentially an argument for the recovery of leisure and contemplation. As they have been understood in the Christian tradition, leisure and contemplation are constituent elements of the form of flourishing that is proper to the human being. In Leisure: The Basis of Culture, the twentieth-century German philosopher Josef Pieper observes, “Leisure is a form of that stillness that is the necessary preparation for accepting reality; only the person who is still can hear, and whoever is not still, cannot hear.” “Such stillness as this,” he argues, “is not mere soundlessness or a dead muteness; it means, rather, that the soul’s power, as real, of responding to the real . . . has not yet descended into words.” He concludes that “leisure is the disposition of receptive understanding, of contemplative beholding, and immersion—in the real.”
To the degree that we can presently discern the way in which the most pervasive AI tools are likely to shape our experience, it appears that they will reinforce our immersion not in the real, as Pieper put it, but in simulations of the real. AI tools have a pronounced tendency to further deskill the human person not only in the realm of labour but also in the realms of thought, judgment, and interpersonal relationships. Because they emerge from a socio-economic milieu that prizes efficiency, optimization, and production, they invite us to structure our lives and imagine our good as a matter of achieving these same ends. In this way, they do not offer us a more humane alternative to the anti-human logic of the market and the machine. They promise instead to better equip us to compete and find fulfillment within that logic. But this is a false promise, one that would have us double down on a mode of life that undermines our capacity to flourish as creatures rather than machines. As a result, we will find ourselves demoralized by the unrelenting erosion of our capacity to engage meaningfully with the world and by the realization that our experience is increasingly structured and populated by uncanny simulacra of the real.
In the Gospels, there is a brief but memorable scene best known for its political ramifications. The story begins with religious leaders seeking to entrap Jesus with a question that would force him either to implicitly deny that he was the expected Messiah or to open himself up to the charge of treason against the empire: “Is it lawful to pay taxes to Caesar, or not?” Jesus, conscious of their motives, asks for a coin. When they bring him the coin, Jesus asks, “Whose likeness and inscription is this?” They say, “Caesar’s.” Then he says to them, “Render therefore to Caesar the things that are Caesar’s, and to God the things that are God’s.” In this way, the snare is avoided, and the demands of Caesar are utterly subverted. What is Caesar’s? A piece of metal with his image. Give it to him. I imagine Jesus flicking the coin back at them. But what is God’s? Everything. Everything that matters. The life of the whole person. Just as the coin bears the image of Caesar, so in the Jewish tradition the human being bears the image and likeness of God.
I find myself thinking—or sensing, feeling, intuiting—that something of this spirit might guide us well in the present moment as a very different totalizing force demands our resources, our attention, and our unwavering loyalty. What would it mean to render to the machine what is the machine’s? To regain a sense of what it is to be a person, coupled with a subversive practice of the same, within a techno-economic system whose default settings incline us to forget this vital fact about ourselves and our neighbours? To reclaim a confidence in what we can do ourselves and for one another in the face of an array of technologies, services, and institutions that market themselves under the implicit sign of our ostensible helplessness and the banner of a debilitating liberation? Let the machine have everything that is stamped with its spirit. Let us keep everything else.