When I was ten years old—way back in 1992—my grandparents gave me a gift that felt as massive and serious as a cathedral: the entire thirty-two-volume set of Encyclopedia Britannica. I had already taken to checking out single volumes of World Book from our local public library, hunting for answers to whatever question preoccupied my fourth-grade mind that week. What was Prince William’s school like? What do killer whales eat? I wanted to know. My grandparents knew a future nerd when they saw one and made an aspirational investment. Why not give me the gold standard—all the world’s knowledge, alphabetized and leather-bound, at my fingertips?
Of course, the articles were far too complex for my reading level. The tissue-thin Bible paper made me nervous to touch, and the volumes were so heavy I could barely lift them from the shelf. But their message came through: knowledge matters. Britannica was also passive. It was a reference that sat there waiting; you had to bring your own curiosity and desires to it.
By the early 2000s, encyclopedias—indeed, even the idea of a centralized reference work—had been obliterated by Google. If you wanted to know something, you didn’t walk to a shelf. You typed into a box. You didn’t rely on a small circle of authoritative editors but on an invisible army of web pages written by—who knows?—and filtered by an algorithm designed to predict your impression of relevance. Unlike with Britannica, you didn’t have to wait a year for updated facts. They were refreshed constantly. There was no end to the search and seemingly no limit to access. Google went mainstream the same year I started college, and it fundamentally shaped my expectations about how to ask questions, how to communicate with others, and how quickly a curiosity can be satisfied.
Now, just a couple of decades later, search is also on the path to obsolescence. Google and other major technology firms are in the process of replacing the web with generative AI. You don’t browse. You don’t sift. You simply ask, and the AI gives you a singular answer—synthesized and personally tailored, powered by large language models trained on massive data sets and designed to predict what you want to know, how you want to hear it, and what will keep you asking. The models now are not even limited to satisfying your curiosity; they want to be your companion and personal secretary. They want to take decisions off your hands.
We are living through the shift from the era of search to the era of AI. And while most people outside tech or education have not quite grasped what this means yet, those of us who work at universities see it already: the speed, the scope, the social and cognitive disorientation. This shift will be thrilling and jarring. It will be complete before we even have the chance to contextualize it. And it will fundamentally reshape the way we educate human beings—if we let it.
Atomization and Authority
Since the first colleges were formed at Oxford in the 1200s, universities have performed two distinct functions in society.
First, they are places where people (typically emerging adults) are set apart for a period of formation. They live among peers, train for professions, and develop the virtues needed to play their role in broader society. In medieval Britain, this meant preparing priests and aristocrats. From the nineteenth century onward in the United States, it meant preparing young people to be free citizens of a democracy. The core idea is that this formation happens in a community, animated by ideals of the good life, where everything from the teachers to the rituals to the architecture transmits those ideals to the next generation.
Second, universities are places where the truth is gathered and stored for the benefit of society. No topic is immune from a student’s or a scholar’s interest; we have experts in medieval handwriting, in quantum mechanics, in the regulatory processes for accountancy. The ideal of a university is one where the truth of any subject, no matter how novel or esoteric, can be discerned through discipline. We fund the research projects that private industry finds no current use for. We look for connections between streams of knowledge and devise new fields. Whereas in other parts of the educational system teachers are hired and retained on the basis of their ability to implement a curriculum, in the university the qualification for employment is one’s ability to discover new knowledge.
Powerful AI raises two existential problems for these traditional functions of universities.
The first we might call the problem of atomization. Generative AI, by its nature, draws us away from others. It delivers a personally optimized experience by generating a style, a tone, a set of facts, an experience that is just for you. Its inputs come from anywhere and everywhere, a Frankenstein of scraped websites, stolen books and articles, and data labelled in distant sweatshops. A student who used to puzzle through a difficult text with classmates and a professor now pastes a prompt into a chatbot and receives a tidy summary. She may not even realize that she’s forfeiting experiences like struggle, or discernment, or collaboration, or discovery. The AI simply gives her what she wants—or, rather, what it predicts she will want right then.
Major tech firms propose this as a feature of education, not a bug, and universities will have to reckon with the fact that the next generation of students who arrive on campus will have been thoroughly habituated to learn in these atomized ways. Google’s Gemini team promises that AI agents will soon be able to teach children to read and do mathematical reasoning. What’s left unspoken is that parents and caring teachers may no longer need to. And students, increasingly, will not need each other either. The arrival of comprehensive, self-paced, AI-facilitated instruction guarantees that students will grow accustomed to learning on a hyper-personalized trajectory.
What we are watching, in real time, is the dissolution of the educational commons. The classroom as a shared space of inquiry. The library as a site of encounter. The dorm room or coffee shop as a place of epiphany. All replaced by interfaces optimized for the individual. To educate a person, we are told, is simply to provide him or her with a packet of information. And now, that information can be delivered in milliseconds, free of context, and stripped of other people. Universities cannot continue to serve their function of formation if the community has no common experiences or causes to unite them.
The second challenge we face is what we might call the problem of authority. In the era of encyclopedias and libraries, students relied on a small number of trusted gatekeepers. There were books, reference works, syllabi, professors. Authority was concentrated and visible. In the era of internet search, we had the opposite problem: we had no authorities and infinite options. You had to become your own filter, comparing sources, scanning links, weighing biases. The upside was access. The downside was fragmentation.
Now, in the era of generative AI, we find ourselves in a new and even more disorienting situation: we are back to having one option (the answer the AI gives us), but now with no authority behind it. There is no author. No visible standard of expertise. There is only the model, predicting what answer will be most relevant to you now.
And relevance is not the same thing as truth.
Generative AI is the ultimate sophist. It is not trying to lead users toward reality; it is designed to hold your attention. It does not tell you what is but what will work—for you, for your demographic, for the prompt you gave, for the engagement metric it’s optimizing. It flatters your priors. It mimics your voice. It plays the role of expert, peer, or counsellor as needed. But it is not beholden to any fixed good beyond performance.
In such a landscape, the pursuit of truth becomes less a shared, arduous process and more a personalized content stream. The virtues of inquiry—so central to education—are crowded out by the virtues of efficiency. And the function of gathering and storing and disseminating the truth has never been smooth or efficient, as the experience of one thousand years of university administrators can attest.
The Case for Formation
The singularity has come for universities, and we must adapt as a result. If you think the main point of university humanities classes was to teach expository essay writing, the season ahead will be a catastrophe. The days of a writer struggling to clarify a sentence, synthesize a complex idea, or think of a relevant example are over; students now have the ultimate editorial assistant built into their word processor. The engineering and professional schools will not be spared either. There is little social benefit to credentialing armies of programmers and management consultants and data analysts for an economy where AI tools can do these jobs much more cheaply and efficiently. Those jobs as we knew them are gone, as is our capacity to predict with any accuracy which kinds of professional training will prepare students for this new economy.
Some universities are adapting by rolling out new curricula to teach students how to use AI, as though the companies developing and marketing this software are not also designing it to be effortlessly usable. (Did we need any classes on how to use internet search in the early 2000s? I remember getting hooked on Google in a matter of minutes when a fellow student showed me how to install the search bar in my web browser.)
Given how profoundly disruptive this technology is and will be for our knowledge institutions, we need to double down—not on content delivery, not on skills training, not on AI tools—but on formation.
Let me illustrate. I remember very few of the research papers I wrote in college. But I vividly remember the all-nighters I spent in the library surrounded by friends and takeout pizzas. I remember Thursday-night debate society meetings that stretched into the early morning. I remember the professors who invited me into their homes, and the fellow students who walked with me through the most momentous decisions of my early life—becoming a Catholic, applying to graduate school, discerning a vocation.
Those of us in our thirties, forties, and fifties now are the transitional generation. We lived through the transition to search, which was rolled out with shocking negligence, leaving us to our own devices to navigate the dangers of misinformation and social media. We’re happy not to turn back to the information regimes of the encyclopedia era, but we can also see that our characters and our society have been misshapen during this transition. And now we’re witnessing this new leap, with AI not just transforming tools but reconfiguring institutions and imagination. But the generation just behind us—that’s the generation that will fully inherit the world shaped by this new technology.
We cannot assume they will learn in the same ways we did. But perhaps we can still shape their character. Indeed, decisive action in educational settings right now is critical if we are to make this a humane transition. The university cannot simply be a vendor of information or a certification pipeline. It must be a place of counter-formation—where students are inducted into practices, relationships, and habits of attention that teach them how to be human in a disembodying age.
Here are three areas of focus for those of us working in higher education (though they are adaptable to earlier stages of schooling as well):
Universities Can Offer Space
We need to create unplugged encounters where students can inhabit silence, slowness, and face-to-face relationships. This is not a luxury. It is a necessity.
Retreats. Reading groups. Pilgrimages. Outdoor programs. Common meals. Shared service projects. Residential colleges. Any format that pulls students out of their personalized algorithmic bubbles and into the shared work of paying attention to the real—these are forms of moral resistance.
We must be intentional about this, because every other trend on modern campuses (especially post-pandemic) is moving in the opposite direction: more screens, more efficiencies, more isolation, more remote coursework, more outsourcing of attention.
The virtues we want our students to acquire—humility, hospitality, intellectual courage, truthfulness—require time and proximity. And they require faculty who model those virtues and who are willing to live alongside students long enough for imitation to take root. I suspect that on this front smaller, strongly rooted liberal arts colleges, which are insulated from pressures to digitally scale their student experience, will particularly flourish.
Universities Can Offer Vision
Especially in the first years of college, students need a vision of what a flourishing life looks like in a world saturated with technology. They do not need despair. Nor do they need simplistic technophilia. Authority in the world of AI will not come from controlling knowledge (nobody will do that anymore). It will come from tapping into the profound desires that drive people to learn in the first place.
Universities must be able to articulate these ideals. At my home university, Notre Dame, we have developed the DELTA framework, which centres on five key values for human formation in the age of AI: Dignity, Embodiment, Love, Transcendence, and Agency. This framework directs our conversations about how to adopt technology and how to help the transitional generation develop good habits. Each value pushes against the technological reductionism of our moment and offers a positive orientation:
- Dignity: Every person is valuable just because they are human—not because of how smart, wealthy, or productive they are. We should take this into account when using AI to increase scale, speed, or efficiency and ask how individuals are affected in each case.
- Embodiment: We are physical, social, vulnerable people. Our lives and relationships happen through our bodies and within communities. While some uses of technology can improve health and reduce suffering, our mortality makes life precious. Our senses help us cherish what we encounter—virtual reality can never fully capture lived experience.
- Love: We should care for others unconditionally, seeing them as they are and valuing what makes each person unique. Relationships of all kinds involve two-way exchanges, which give them meaning. Tools like chatbots might simulate companionship, but real, messy human connection is a fundamental need we all must fulfill.
- Transcendence: Some things in this world are freely given and impossible to optimize or monetize with technology. Beauty and awe help us feel connected to something bigger than ourselves. As we increasingly use technology to interpret the world, we need to equally develop our love for the truth and nurture our spiritual lives.
- Agency: To live a good life, people need freedom, focus, and the ability to make moral choices. Some of the technology we use can diminish these virtues. As agentic AI gains momentum, we need to identify and protect decisions that only a human conscience should make and prepare a new generation to take their moral responsibility seriously.
When students see their education as part of this broader vision, they become less anxious about tools like ChatGPT and more equipped to use them wisely. They understand that what matters most is not whether they use AI, but whether they are becoming the kind of people who can tell what’s true, who can love others well, and who can serve the common good.
Universities Can Drive Hope
Finally, students need hope—not just optimism about technology, but a meaningful sense of vocation in the world that AI is actively reshaping. That means giving them not only a seat at the table but a serious role in building the future. They need to see that their voices matter, their questions count, and their character has weight.
Employment trends look grim during this transitional phase, especially for students who have been training in the kinds of technical knowledge work at which AI can now easily outperform humans. Ironically, the advent of a technology that is astoundingly good at sorting information by relevance has induced a crisis in which large numbers of people have become socially and economically irrelevant.
We need to develop more sophisticated job placement programs, to be sure, but we also need programs within universities and for recent graduates that help people discern their relevance in a world saturated with AI. Here universities will need strong partnerships with corporations, non-profits, government agencies, and faith communities that are willing to offer students opportunities to experiment with new types of careers and influence the direction in which these institutions evolve. Generative AI is not going away. Nor should it. But if we want a humane future, we will have to form humane persons—people who can live in community, search for truth, and resist the pull toward optimized desolation.
I have two little nieces, and every time a birthday rolls around, I feel that same pull my grandparents had to think of ways to inspire them with a love of learning. Luckily, they are still at an age when they need grown-ups to read to them and when an imaginary tea party is as enticing as an hour with the iPad. I won’t try to pass Britannica on to them (the volumes were sold at a family garage sale decades ago). But I’ll do all I can to ensure they spend time in schools that nurture their bodies and minds, their dignity and love and sense of moral responsibility. And I’ve got just a decade or so to make sure a university system worthy of the name is ready for them when they come of age.