I
I have been reading a lot of books on AI in recent months, and most of them are quite bad. Some, like Salman Khan’s Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing), are hilariously terrible hype narratives. Some are tedious and feel like they were written by ChatGPT. Others are a bit better but do little more than recite potted histories of the development of AI, describe how these technologies work, and conclude that we should use them for good and not for evil. Fair enough, but also not very helpful. I had higher hopes for Shannon Vallor’s The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, mostly because Vallor’s first book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (which I reviewed when it came out), demonstrates her careful application of the virtue ethics tradition to new technological challenges. Her new book also gets a lot right: Vallor identifies the real threats posed by AI, and she offers a helpful critique of effective altruism. By the end of the book, however, I wished for greater analytic clarity and the kinds of responses that, by the book’s own admission, we most need.
As the title indicates, the book’s basic claim is that popular forms of AI function as lossy, distorted mirrors of human thinking: “Today’s most advanced AI systems are constructed as immense mirrors of human intelligence. They do not think for themselves; instead, they generate complex reflections cast by our recorded thoughts, judgments, desires, needs, perceptions, expectations, and imaginings.” These AI models are most likely to result in cultural and intellectual stasis and even, to the extent that AI displaces human thinking, degeneration. Hence Vallor rightly argues that it’s a mistake to deploy these technologies as substitutes for moral and intellectual virtues: we are “building and using AI in ways that disable the humane virtues by treating them as unnecessary for the efficient operation of our societies and the steering of our personal lives.” The way that industries and government agencies are rolling out AI products to replace (expensive) human care and discernment with digital substitutes is “misaligned,” to use today’s buzzword, with human flourishing but all too well aligned with the dominant cultural narratives that prioritize quantifiable efficiency over the cultivation of human virtues.
Understanding AI as a mirror enables Vallor to critique the hyped fears of robot genocide as dangerous distractions from the real threats AI poses. “Corporate titans and even political leaders,” she argues, “have taken to warning of the ‘existential risk’ posed by powerful new AI systems, [but] these warnings profoundly misunderstand the risk AI poses to humanity.” The real threat is not that some computer optimized to build paper clips will turn the globe into a pile of bent wire but that many humans will miss out on the opportunities to cultivate the moral and intellectual virtues that enable us to be excellent humans. “We acquire virtues by doing,” Vallor insists, and “if automated reasoning crowds out our own opportunities to practice moral and political reasoning, it leads to what I have called ‘moral deskilling.’” But to the extent that we welcome the opportunity to outsource the efforts and practices that cultivate human excellencies, we become complicit in AI’s real harms. Hence Vallor concludes that “the danger to our humanity from AI is not really coming from AI itself. The call is coming from inside the house; AI can devalue our humanity only because we already devalued it ourselves.”
In particular, Vallor shows how AI devalues human persons by obscuring their nuanced, particular, embodied reality. Mirrors that reflect the digital data trail left by humans will inevitably miss aspects of human intelligence and show us distorted versions of others and ourselves. Vallor warns, “If AI mirrors become the dominant way that we see who we are, then whatever they miss will sink further into invisibility.” In many ways AI represents an intensification of the problems caused by social media, which, as Chris Bail has argued, acts as a “prism” that distorts our perception of other people and ourselves. In both cases we accept compressed digital substitutes for embodied relationships with other persons. Such incarnate relationships—with human teachers, human coaches, human mentors, human caretakers—may increasingly come to be the preserve of those who can afford to pay a premium: “Knowing others, and having others come to know us, is expensive, and we aren’t investing in it anymore. In many settings, it is already a luxury of the privileged few. AI mirrors are what the rest of us are being offered as a substitute.” Although many corporations promise that AI will make society more equitable, the most likely result of its widespread adoption will be to create a drastic bifurcation between those humans forced to interact with computers and those who can afford to relate to other persons.
Working from within the virtue ethics tradition, Vallor recognizes both the false promises of AI and the misguided priorities of effective altruists, who are among the most prominent AI critics. Effective altruism—particularly in its longtermist strand, which derives from utilitarianism—holds that the greatest risk of AI is an artificial general intelligence (AGI) that is “misaligned” with human interests and so causes great harm. Because the number of humans who may live in the future is so much greater than the number of humans alive right now, and because AGI has the potential to drastically degrade or improve future human lives, effective altruists argue that we should redirect immense energy and funding to mitigate AGI risk. Vallor notes that many effective altruists do in fact invest in good charities and exhibit a commendable concern for others, but she warns that their underlying calculations fail to account for AI’s real threats. “On [their] view,” she explains, “far future threats, even highly speculative ones like AGI, simply dwarf even the most urgent moral claims of presently living or soon-to-be-born humans.” Vallor finds this both laughable and dangerous. Laughable because today’s AI systems “can’t think their way out of a paper bag. . . . They are hard to build and hard to keep working well. They are subject to model drift, the rapid performance decay that happens when a model is exposed to real-world data that diverges from its training data.” And dangerous because focusing on this threat distracts us from the present dangers of AI systems that seek to mimic human capabilities and then replace humans or force humans to adapt themselves to a world ordered around digital replicas of human intelligence.
So far so good. Vallor’s book helpfully diagnoses many of the limitations that mark contemporary approaches to AI. But when she turns her attention to articulating how AI might be integrated into our society in more restorative, healthy ways, the results are decidedly mixed.
I did find two of her proposals in this regard somewhat helpful. First, Vallor argues that the most promising uses for AI will be forms of intelligence that feel alien rather than those that try to replicate human intelligence. Because of the massive data trove that human intelligence has created, it’s relatively easy to train AI to mimic humans. But Vallor points out that “we already have people to be people.” Perhaps there is some value in training machines to perform “dull, dirty, and dangerous labor,” but in general we should prioritize “AI systems that augment human capacities rather than mirror them. [Erik] Brynjolfsson points out that AI tools often yield the highest performance when they are paired with a human agent in a complementary way.” So we don’t need chatbot therapists or teachers, but narrow AIs that can perform tasks humans aren’t particularly good at might offer real assistance. Vallor gives the example of an AI that matches suicidal callers with volunteer counselors, but one might also point to AIs that read X-ray reconstructions of old scrolls or AIs that model protein folds. These are examples of AI that don’t try to pass the Turing test or mimic human speech; instead, they do things humans aren’t good at and leave human thinking and work for the humans.
Second, Vallor invites us to reimagine technology according to one of its original purposes: to assist us in the “art of making and keeping a home for others.” Evaluating technologies by their effectiveness in making hospitable, caring households could be a useful standard. It’s in this vein that she proposes we “valorize the virtues of restoration and repair just as much or more than we valorize creation.” The renewed cultural interest in repair and homemaking is indeed an encouraging sign, but it remains unclear how AI might serve these goals. Vallor asserts that “AI has a vital role to play in this process,” but she gives no details or examples to indicate what this role might be. This is a particularly fraught issue because, as Vallor herself has argued, when we offload care to robots, one of the costs is that we lose real moral goods found only in actively caring for others.
As in this case, Vallor’s proposals for orienting AI toward human goods often remain frustratingly vague. For instance, in one chapter she contrasts AI’s mirroring of past human thinking with the virtue of prudence, which is often depicted as a woman gazing into a mirror pointed behind her. As Vallor explains, “Learning from experience is not the same as repeating it.” Indeed, but she does not specify how we might distinguish between these different backward gazes. Her conclusion in this chapter doesn’t really clarify things; she simply proclaims that we “need to challenge the way we use today’s AI tools. Increasingly they function in our hands not as mirrors of prudence, but as engines of automated decision-making that engrave ever deeper into the world the patterns of our past failures. . . . We must seek other roads into the future, charted with the aid of our best technologies—but most of all, with the virtues of moral imagination, courage, and shared wisdom. This is our new world; not a looking-glass projection of the old.” This all sounds great, but Vallor gives little guidance on how we might use AI prudently instead of recursively, or how we can develop the virtues we need to use these tools well.
It’s trivial to pronounce that we should use AI wisely and not foolishly, that we should learn from the past rather than repeat it, yet Vallor’s perorations in each chapter leave readers with no clarity about how exactly we’re supposed to subordinate AI to human wisdom. Time and again, Vallor falls back on opaque bromides about AI: “We should reject those designs and applications that unjustly and irresponsibly endanger, impoverish, and diminish us and our communities.” Instead, “our AI mirrors can be tools of self-illumination and self-making. They can’t liberate us or care for us. But we can still use them to liberate and care for ourselves and one another.” It’s hard to disagree with statements that are this abstract and banal. Is it possible to justly and responsibly endanger, impoverish, and diminish people and their communities? What would it mean for an AI mirror to be a tool of self-making? Further, there are no examples to accompany these vague assertions, so they offer no real guidance as we seek to discern how AI might be deployed in ways that help us care for one another.
Perhaps because she wants to avoid being pigeonholed as some kind of Luddite, Vallor repeatedly insists that AI can be beneficial, but she ultimately hedges this claim in ways that make it hard to take the possibility seriously. For instance, she writes, “As long as the objectives and outputs of AI models are transparent and contestable, their benefits, costs, and risks justly distributed, and their environmental footprint justifiable, AI systems have a place in a sustainable future.” Besides the lack of clarity here (What would “justly distributed” risks entail? What kind of environmental footprint is justifiable?), the conditional itself suggests that Vallor cannot imagine a sustainable AI in our current economy and culture: none of today’s powerful AI models have objectives and outputs that are transparent and contestable. It’s rather like saying that as long as wars don’t kill anyone, they are just, or that as long as agricultural herbicides don’t run off the fields, they are sustainable.
Ultimately, I found Vallor’s gestures toward the possible benefits of AI unsatisfying because her anthropology—her understanding of human persons and human intelligence—remains underdeveloped. Lacking a coherent and robust account, Vallor seems to move between different definitions of human persons and the purposes that our intellectual work should serve. On the one hand, she notes the “gradual erosion of human moral and political confidence in ourselves and one another,” though it’s not clear what standard she’s using to judge this decline. On the other hand, just a page later, she undercuts much of her argument by declaiming that “we are indeed machines of a biological sort, like all living things. But we are among those rare machines who make ourselves.” Vallor is drawing here on José Ortega y Gasset’s understanding of autofabrication, or self-making, as central to human existence. A few pages later, however, she approvingly cites Alasdair MacIntyre’s Dependent Rational Animals: Why Human Beings Need the Virtues, which, as the title indicates, works from a quite different understanding of human persons. Should we look to AI to help us “make ourselves”? Or should we look to AI to help us become more virtuously and responsibly dependent? These are very different projects, and it’s hard to reclaim our humanity when its nature remains elusive.
In particular, Vallor may need religious categories and vocabulary to adequately name her concerns with the dangers of AI. Without understanding humans as creatures of a particular sort, creatures whose origin and telos are defined by their relationship with a Creator, she struggles to name the moral threats that AI poses to individual human development and our broader society. I’m not sure it’s possible to give an adequate critique of artificial intelligence without an account of human consciousness and relationships that is attuned to transcendence and so recognizes that the mind exceeds the brain and that our dependence on one another is a feature rather than a bug of our existence.
Any clarifying analysis of AI must build on a coherent anthropology, a robust understanding of who humans are and what our telos is. If Vallor assumes that humans are themselves machines, it’s hard to see what the problem might be with offloading difficult or unpleasant tasks to other machines. Near the end of the book, Vallor cites Ortega y Gasset’s claim that “the vital program is pretechnical,” one that articulates the “final aims” our technologies should serve. She concurs, asserting that “it’s this vital pretechnical program for AI that we haven’t yet written and that we urgently need.” Indeed. Unfortunately, that is not the book that Vallor has written. So I’m still waiting for this pretechnical book on AI, one that articulates a theological anthropology, distinguishes clearly between technologies that erode our relationships and those that sustain them, and helps readers imagine how to live well as humans in a world shaped by digital replicas of human intelligence.