I
In the 1950s and ’60s, the discipline of psychology made an intellectual breakthrough. The “cognitive revolution” marked a paradigm shift in which it was realized that internal mental processes like memory, perception, and problem-solving (and not just external behaviour) were necessary for understanding human psychology. Genius. Likewise in 2017, an economist won a Nobel Prize for showing that people’s economic decisions do not always reflect rational self-interest but are prone to error and shaped by psychological factors like concerns with fairness—for which the prize winner was hailed as “making economics more human.” Brilliant. The preposterousness of some of the models of humanity in the human sciences is underscored by the embarrassing obviousness of their required corrections. Did anyone think to ask: Why or how could such scholarly accounts be so disconnected from reality that they needed these kinds of no-duh corrections?
Every science, Aristotle taught, must attune its methods and explanations to fit its particular subjects of study. Subjects of investigation must determine how they are to be studied, not vice versa. “We must try to investigate each type in accordance with its nature,” he says in The Nicomachean Ethics, and “not look for the same degree of exactness in all our studies, but only for as much as the subject matter in each case allows, and so far as it is appropriate to the investigation.” Thus, “it is the mark of the trained mind never to expect more precision in the treatment of any subject than the nature of that subject permits.”
Any science of human life must, by this account, begin by considering seriously the actual nature of human persons—as best as that can be understood through combined personal, social, and historical experience—and then construct its methods, theories, and explanations to be appropriate to actual humanity. Human reality must govern the nature of its science, not be forced into some predetermined idea of what science is supposed to be.
Early modern science rejected Aristotle. Beginning with the repudiation of his ideas about teleology in nature—that a tiny acorn, for example, had the proper natural telos, or end, of becoming an oak tree—leaders of the seventeenth century’s scientific revolution jettisoned Aristotle wholesale. Most were confident that modernity could and should sweep away all things ancient and medieval. With that was lost Aristotle’s crucial insight about scientific practice.
The ideas of thinkers like Bacon, Newton, Locke, and Hume culminated in the early twentieth century in a widespread belief in the existence of the Scientific Method, a single normative procedure of investigation that applies to every subject of study. That idea is no doubt still taught in high schools today.
Also during the twentieth century, the modern human sciences—economics, psychology, sociology, anthropology, political science—were professionalizing academically, and the era’s influential ideas about science stamped an indelible imprint on those disciplines, in many ways for the worse. One of the results was the human sciences’ persistent tendency to develop one-dimensional models of human beings.
Examples are legion. Economics said (and says) to treat humans as rationally calculating utility maximizers, reducing all of human life to the making of choices the same way sensible people shop for auto insurance. Rational choice theory influenced other social sciences, especially political science, with the same utilitarian model. Skinnerian behaviourist psychology pronounced that humans are equivalent to pigeons and rats insofar as all of our activity can be explained as deterministically learned behavioural responses to the “operant conditioning” of external rewarding and punishing stimuli, rendering the lived experience of human freedom and dignity delusional.
My own discipline of sociology has generated a menagerie of impoverished models of humanity, both in “grand” and “middle-range” theory. For example:
Structural functionalism: Humans are components of social systems that as “cultural dopes” are over-socialized to perform their functions to maintain system equilibria.
Collective behaviour and social control theories: Humans are irrational animals susceptible to herd-like fads, panics, manipulations, and mobs that must be socially contained by internalized and external controls.
Marxist sociology: Humans in their “species being” are expressive fabricators of materiality who exploit each other through private property and power, yet are destined through the inherent contradictions of historical dialectics and revolution to achieve equality and stateless mutuality.
Social exchange theory: Humans are reward seekers who live to make optimal exchanges in social “markets” to enhance their relational, material, and emotional benefits.
The litany continues.
Dramaturgical symbolic interactionism: Humans are play-actors performing public presentations-of-self on interactive stages in order to validate their identity claims.
Social situationism: Humans are malleable social chameleons that continually modify their values, interests, and behaviours to fit ever-changing immediate social situations.
Social network theory: Humans as “nodes” are the composite sums of the influences of their social-network ties.
The “agency-structure debate”: Humans are self-directing individual agents of free action who must overcome externally constraining social structures that obstruct their wills.
Vocabularies-of-motives theory: Human actions are not caused by internal ex ante motivations but driven by conformity to social norms, yet justified post hoc through culturally plausible social rhetorics as motivational accounts.
Et cetera, ad nauseam.
Each of these accounts of course contains some element of truth and insight. The problem is not their absolute falsehood. The problem is the underlying compulsion of imperialistic reductionism. It would be too modest merely to offer a theory that captures some significant feature of human experience. Instead, those in the human sciences recurrently feel the need to advance “new and field-changing” paradigms showing that “humans are really essentially nothing but [some reductionist X],” which so happens to ignore or explain away much of everyone’s irreducible lived experience (including of the human science scholars themselves). That kind of sensationalism gets publications, name recognition in academic circles, and the enjoyment of destabilizing wide-eyed sophomores in classrooms. But it is not good science.
Nor does it explain how the human scientists themselves somehow transcend the constraints of their own theoretical systems. Was Skinner a conditioned pigeon? Were structural-functionalists cultural dopes? Was Foucault just another relative power construct? If so, why should we take any of them seriously? If not, what explains their magic trick of transcending their own humanity?
Most problematic theories in the human sciences tend to fragment and flatten, sometimes even dehumanize, humanity. Some other accounts, however, elevate humans to nearly godlike status. Early pioneers of sociology promised that their discipline could reform away social problems and engineer great societies. Proponents of strong social constructionism insist that humans “socially construct reality” itself. Postmodernist social theory (in its optimistic version) imagines that emancipated humans are free to create and re-create their selves and identities through deliberate and playful reconfigurations of narratives, discourses, body art, sexual presentation, and so on. Such accounts vest humanity with almost divine creative powers.
In his 1943 book The Nature and Destiny of Man, Reinhold Niebuhr argued that Christian anthropology effectively holds together both the brilliantly light and the dreadfully dark sides of humanity. Neither “made in God’s image” and “a little lower than the angels” nor “dead in trespasses and sins” and “hearts desperately wicked” can be omitted from the Christian account. Post-Christian secular culture, however, Niebuhr observed, proves unable to sustain the necessary strain between these contrasting truths. The dynamic tension of complex reality gives way to simplified opposites. Modern views of humanity thus tend toward either flights of credulously optimistic humanism or descents into dark pessimism, misanthropy, and nihilism. It’s either Kant, Rousseau, and Mill or Schopenhauer, Nietzsche, and Lenin. The same polarized tendencies show up in accounts of humanity in the human sciences.
Conventional objections to this critique reply that the human sciences never claim to represent experience or reality accurately or adequately. “That is what the humanities are about,” some will say. Others will insist on the primacy of parsimony—that is, maximizing simplification for the sake of clarity and efficiency. Yet others readily admit that their models of human actors are wildly inaccurate abstractions, but at least they predict well, or that they are simply useful heuristics for seeing interesting things. Still others will say the human sciences don’t need to make any assumptions about human nature to work, that we can operate without that “ontological baggage”—just collect and analyze data.
The correct reply to such objections is this: They are wrong, based on misguided ideas of what science is about. Science is about describing and explaining as accurately as possible what exists in reality, and how and why it works the way it does. And reality is complex. Doing science requires accounting for major complexity—including, for the human sciences, the real complexity of human beings.
Let us briefly reply to each of the above objections in turn. And here I lay on the table the cards of my critical realist philosophy of science—the version developed by Roy Bhaskar, Margaret Archer, Doug Porpora, and others—which provides, to my mind, the only adequate metatheory for the job.
Of course, theories simplify. But the purpose of theory is not to butcher and compress reality into one-dimensional crudities, like the body parts psycho killers pack away in their freezers. We theorize in order to identify and conceptualize the crucial, real elements and causal forces, operating beneath and within what William James called the “blooming, buzzing confusion” of the observable world, that are needed to explain important empirical patterns. Rather than reducing reality to mangled, misrepresenting fragments, good theory draws out from what we can observe the constituent and explanatory elements needed to understand the conditions and events that reality generates.
We are, in other words, after what is most important in reality, not theorizing fictions that (supposedly) predict or wow the credulous. That goal, of course, presupposes that human inquiry enjoys some genuine-if-fallible connection to reality. If anyone disagrees, they have no business in science, and I suggest instead they take up a career in creative fiction or digital animation.
When our theories only account for parts of reality, as all always do, we must explicitly acknowledge their limits and conditions of applicability. Confidence about reality itself (ontology), combined with appropriate modesty about our understanding of it (epistemology): that is what we need—not the reverse or debilitating doubts about both.
Belief in a chasm presumed to partition the human sciences from the humanities is mindless. The very “divisional” categories assumed in this belief are the product of nineteenth-century German research universities, which were caught up in their own philosophy-of-science confusions and struggles. The human sciences cannot write off the humanities as irrelevant to their concerns.
For one thing, science cannot even get started without some operative philosophy of science—whether acknowledged or not—that guides what it is doing, why, and how. Much of social science also relies on historical knowledge of the past for its work—that is, humanities scholarship. For another, at bottom, the human sciences are motivated by normative concerns about desirable human lives and societies grounded ultimately in moral visions of the good. Humanities questions again. The concerns of the humanities are, in fact, the inescapably enveloping context for the human sciences, whether or not anyone realizes or admits it. Failure to see that (along with myriad profound confusions in the contemporary humanities, too, unfortunately) helps explain the human sciences’ many claptrap models of the human.
Moreover, William of Occam (of “Occam’s razor” fame) never said to hack reality down to as few simple parts as possible. He actually said, “Entities must not be multiplied beyond necessity” (Entia non sunt multiplicanda praeter necessitatem). That is a totally different directive. The complexities of human personhood, it turns out, necessitate multiplying features in our accounts of the human, not reducing them. Yet the fact that few human scientists actually understand Occam’s razor—merrily operating instead with a specious popular idea about parsimony—again reflects a widespread and harmful ignorance of what should be essential history and philosophy.
The goal of science—especially the human sciences—is not to predict empirical events. Students of election polling and sports betting are in the business of predicting. But the purpose of science is to describe and understand reality as accurately as possible, what it consists of and causally how and why it works as it does. That purpose means it is about discovery and explanation, not prediction. Science can sometimes generate reliable expectations about future events in certain realms (mostly the natural sciences) that are amenable to it (Aristotle again). But prediction is the weakest card in the hand of those studying human doings—our track record in forecasting the future is embarrassing. Complexity upends prediction.
If human science theories are intended as mere heuristics to highlight important and interesting parts of social life, great. Then they should clearly advertise themselves as just that, not anything grander. And they should keep their place in the larger array of complex accounts and insights. But that is not, in fact, how most tend to operate.
Finally, to say that a human science can operate without presupposing some working anthropology is fatuous. There is simply no way to leave behind that “ontological baggage.” When scholars think they have done so, they have not left behind anything: the baggage they are lugging around is simply invisible to them. Those who insist otherwise are just not paying attention or thinking very deeply. Suppositions about persistent human capacities and tendencies are always at work in the human sciences. When they are taken for granted and ignored, their influence on scholarship is just as powerful yet beyond questioning.
To be fair, some streams in the human sciences offer better accounts of humanity than others. They are those that take cultural meanings (and therefore philosophy, history, ethics, and art) seriously. Cultural anthropology thus often (though not always—think structuralism) proves more adequate to the human experience than, say, standard economics. And those identifying with “humanistic sociology” less often mangle humanity than, for instance, functional neopositivists. But those are minorities. The dominant tendency is theoretically to caricature the human.
What explains these problematic scholarly proclivities? I have mentioned modern science’s disregard for Aristotle’s rule to adjust the nature of scientific investigation to the character of the reality being investigated, as well as the twentieth-century belief in the Scientific Method.
The dogma of positivism, too, has been pernicious in the human sciences. Positivism insists that only the methods of natural science produce reliable knowledge, and that knowledge consists in registering lawlike regularities in empirical doings. (Positivism was philosophically discredited ages ago, but its ghost still haunts the halls of the human sciences and popular thinking.)
The doctrine of empiricism—namely, that true knowledge or justification comes only from sensory experience and empirical evidence—compounds the problem. Empiricism conflates science’s being empirical (correct) with the dogma that only empirically observable entities exist in reality, such that the empirics of science alone offer the road to valid knowledge (erroneous).
Since ideological purism here is impossible (for starters, no empirical evidence could possibly validate empiricist claims—the idea is obviously self-defeating), most philosophically oblivious human scientists are content to accept but not commit to some fuzzy version of empiricism-ish. That makes trouble. For example, since we cannot directly empirically observe people’s meanings, values, beliefs, emotions, moral commitments, or other experiential subjectivities, it recurrently appears necessary to cram humanity into theories focused on behaviours, choices, rewards, costs, performances, accounts, and so on (never minding that items like rewards and costs require unobservable subjective processing to have any causal effects).
Contra empiricism, Aristotle correctly said nearly 2,400 years ago, “We have to use the evidence of visible facts to throw light on those that are invisible.” Try, however, talking in academia about invisibilities as being real. Colleagues break out in hives, because that sounds like the ghosts and gods modernity was supposed to have banished (the anxiety nerve it touches is itself revealing). Precious few ever reflect on how their reactions disclose highly particular ontological and moral commitments grounded in their highly specific location in historical development. We analyze everything but ourselves. We notice everyone else’s cognitive flaws and cultural biases, but our own remain (ahem) invisible.
Deeper in the background, various influences of nineteenth-century utilitarianism and evolutionism applied to human life have sent the human sciences down many wrong paths. Even further back, the early modern (and still contemporary) faith that quantification per se makes anything “more scientific” also lends itself to contorting the human in inappropriate methods and models.
Disciplinary status politics also matter. Insecurities about the identities of the human sciences relative to the natural sciences, for example, exacerbate the problematic roles that positivism and empiricism play. “Physics envy” is powerful, even when unadmitted, and it is made worse by human scientists often aspiring to emulate outdated images of how physics (not to mention cosmology) actually works today. Then, among the human sciences, a similar disciplinary envy also operates—namely, resentment of economics by the other disciplines for its greater public esteem. Economists are the high priests in the holy of holies of the temple of what is truly sacred, Growth of the Economy, and are thus consulted by financial investors and politicians and revered by the public.
Specific institutional dynamics also influence outcomes. Many universities live on external research funding and press faculty to obtain grants. But almost no research-grant programs incentivize big and broad thinking (Templeton being an exception). Meanwhile, talking heads in higher education yammer on about the importance of interdisciplinary work, while in fact every incentive presented to research faculty rewards hyper-specialization. Expertise and leadership in a subfield, which requires sustaining tight focus for mastery, is what is always rewarded in the end.
So, to ask human scientists during the first thirty years of their careers, starting day one of graduate school, to spend time and effort learning the philosophy of social science (and history, ethics, or—gasp—theology) would be to bid them intentionally to disadvantage themselves against their competitive, specializing peers. In the long run they would as a result become better scholars, teachers, and intellectuals. But every institutional disincentive stands in the way, and for most academics the long run is way too far away. The outcome? Our hyper-fragmented multiversities rather than so-called universities.
Does any of this problematic theory matter? Do misguided anthropologies generated by and operating in the human sciences have any effect on how people and societies live?
Yes, I think so. If not, the next obvious question would be why the human sciences deserve to exist. These models of humanity are taught across decades to countless millions of college students, who presumably absorb something from them. They generate myriad terms that ordinary people use to make sense of the world, like “socialization,” “game theory,” “imposter syndrome,” “cognitive dissonance,” and “triggering,” among hundreds of others. They filter out through popular books and articles—by people like Malcolm Gladwell and Steven Pinker—to shape social imaginaries and public discourse. And they form defining backgrounds of popular movements and policy initiatives, from parenting trends and higher-education marketing to communist revolutions and neo-liberal globalization.
The human sciences, like the humanities, do not simply describe and explain reality. They shape it. They advance not only “models of” reality, to use anthropologist Clifford Geertz’s phrase, but also “models for” reality. Their accounts of what supposedly is also specify what is normal, implying what should be the case. Economists may claim their models simply describe how things work, for instance, but over time they actually socialize people into their very own image: properly functioning humans in life are more or less rationally calculating utility maximizers. Are we then surprised to end up with the troubled world we face today?
The human sciences are both explanatory and formative enterprises because humans are, in sociological lingo, “self-reflexive” creatures. What we think we know about ourselves and the world around us itself somewhat shapes what we become—through our self-understandings, working categories, daily practices, life projects, and social institutions. Scholarship backed by the (even limited) authority of university scientific expertise disseminates various ideas that humans are “really just” X, Y, or Z (what Donald MacKay dubbed “nothing-buttery”). Over time, the messages sink in, creating conditions for slippages and adjustments in self-understandings and related behaviours.
Those conditions then solidify into laws, regulations, norms, and institutions—which in turn help to shape people in certain directions and not others. Social orders, in other words, reflect, even if roughly, how their occupants understand the nature and point of human life. So good human science can be enlightening and constructive, and bad human science dangerous. (None of the above, it must be added, justifies the current right-wing political assaults on higher education—a travesty that will only make matters worse.)
The more technological powers humans generate, the more crucial becomes sanity on basic questions about the character and ends of humanity. We are told that humans stand on the cusp of huge breakthroughs in artificial intelligence, nanotechnologies, brain-computer interfacing, bioengineering, climate engineering, even transhumanism and posthumanism. If we stand any chance of sustaining some kind of human future, we need—against the overwhelming drives of nation-state competition and corporate profit motives—an adequate understanding of humanity.
The human sciences will not be decisive in this regard, but they do exert some influence. They therefore bear responsibility to get it right—as right as reasonably possible—when it comes to our own humanity. When we theoretically promulgate fragmenting or dehumanizing accounts of the human, we contribute to fragmentation and dehumanization in real life. Heaven knows we do not need more of that. Much more is at stake, then, than mere disciplinary statuses and minor academic career celebrity.
Might any constructive steps help repair this situation? In theory we could imagine a host of intellectual revisions and institutional reforms that might produce more realistic and complex treatments of humanity in the human sciences.
My (unrealistic) list:
We need middle schools and high schools that, prior to college, actually educate students decently in history, philosophical basics, and critical-thinking skills (which is not the same as being critical of everything).
We need, among all scientists, attention and clarity on key issues in the philosophy of science.
We need finally to put to rest some popular ideas about science that are flawed and misleading.
We need faculty in all disciplines to become more broadly knowledgeable about areas beyond their specialties, from ancient philosophy to quantum physics.
We need philanthropic foundations to support all this with funding.
We need to imagine alternative forms of higher education and research that do not just mimic the old German research university model.
We need a revival in the public appreciation of the value of a liberal arts education as a contextualizing backdrop to more specialized research.
We need university administrations that value more in higher education than institutional status, revenues, and sports success.
In short, we need research and educational enterprises that are most highly committed to truth about reality and to broadly educating people in ways that lead to flourishing lives and the common good.
Something like that is the start of what I think it would take. For that reason, I have zero hope that mainstream human sciences will learn any time soon to account for humanity in ways that reflect and honour the complex, mysterious realities of human personhood. The forces of status, careerism, conformity, and social reproduction that drive the world today are far too powerful. Even so, these problems are worth naming.
I do believe the human sciences can make invaluable contributions to our self-understandings, which is why I have devoted my career to sociology. But often they do not live up to their potential. Perhaps sometime in the future, tides will turn and we will enjoy realistic human sciences.
Meanwhile, pockets of countercultural resistance that champion more adequate understandings of human persons can and do carry on—religious liberal arts colleges (at least those of them that retain their integrity) being examples. And, for an ultimate backstop, we have to rely on our own personal phenomenological experiences—contrary human science theories notwithstanding—that we are indeed much more than mere cultural dopes, utility maximizers, reactors to stimuli, network nodes, exchangers of benefits, situationist chameleons, dramaturgical performers, carriers of variables, and the dehumanizing rest. As James Baldwin noted, “Not everything that is faced can be changed, but nothing can be changed until it is faced.”