Data come in many forms. A survey might capture a snapshot of a person’s political views. A long-form interview might gather insights into parenting strategies. Ethnographic work by an embedded researcher might uncover patterns in a subculture that take months to piece together. (Consider Evicted, by Matthew Desmond.) Census data may capture information on someone’s level of education and income. Despite the distinct insights and strengths of each kind of data, it’s hard to deny that quantitative data are the crème de la crème of today’s data hierarchy. Some call it “hard” data, comparing it to “soft” data in a way that inevitably makes it sound like the sensible older (perhaps colder?) brother. There is something about a number that seems so, well, certain. When a statement contains a statistic, it contributes to the “evidence base.”
It’s ironic, then, that the study of statistics is actually built on the notion that everything is uncertain. As I teach it to students interested in public service and policy analysis, I must work to help them realize that statistical analysis is both a powerful lens for understanding a situation and, simultaneously, rarely able to definitively answer our most interesting questions. It turns out that answering questions with some degree of uncertainty is often the best we can do. While this may disappoint, it should not surprise us! In fact, I believe this quality puts statistics squarely in step with the social sciences and humanities, recognizing that human questions are complex and few outcomes are inevitable.
But statistics gives us an important distinctive that, on its better days, can make it particularly compelling: in statistics, the expression of the degree of uncertainty is actually made explicit. Do you have a limited sample size? Our models will account for that and provide a commensurately broader range of estimates to reflect that. Are you uncertain as to whether something is measured accurately? There are tools to test the sensitivity of your estimates to such errors. Might there be bias in who appears in your (perhaps non-random) sample? This is a hard problem, but statistics puts up a valiant fight to identify the possible consequences of trying to draw conclusions from such a sample. While any situation could still have “unknown unknowns,” statistics does its best to be precise about our level of uncertainty.
Uncertainty in statistics is not a nuisance or a side issue, but a central parameter with which we concern ourselves. Although most people may see a statistic—say, an “average”—as providing information, my attention turns quickly not to the question that is definitively answered by the statistic, but to those more interesting questions I would wish to answer with it, and the degree of uncertainty in their answers.
For example, suppose I poll a perfectly random sample of one hundred prospective American voters, asking if they prefer Donald Trump or Joe Biden, and I find forty-five for Trump and fifty-five for Biden. In a case like this, the statistic we’re likely to lean on is a “percent” or “proportion” (or perhaps “odds”). Like an average, each of these is just a way of combining information into a form we find useful or intuitive. The question definitively answered by the statistic is this: in that group of one hundred prospective voters, 55 percent say they prefer Biden. Unless you are very interested in those one hundred people—they are your classmates, or extended family, or members of your church—having this information is of limited usefulness. In fact, I find it to be a rule that in statistics, the things that can be known with certainty are, without fail, much less interesting than the things that can only be established with bounds of uncertainty. What we’d like to know is what this sample can tell us about the population of interest: What fraction of all prospective voters prefer Biden? And even more: Who will win? This is the million-dollar question—and indeed, millions of dollars are poured into answering it for at least two reasons. First, it would be helpful to know this in advance of the election as we try to anticipate future policy and leadership, and begin preparing for it. And second? We just wish we knew. We are desperately curious to know both what is really true now and what will happen next. We bristle under uncertainty.
And isn’t that always the case with humans? We struggle with uncertainty, and particularly in today’s polarized climate we seem unable to reckon with it. In the age of COVID-19, amid profound uncertainty, camps have formed that lack both the acknowledgement of trade-offs and the virtue of prudence. As my governor held his daily press briefings, which were on balance excellent, there were moments when I would cringe. At one point, he tried to encourage us in our commitment to stay-at-home orders by saying, “If all of this that we’re doing saves even just one life, it will have been worth it.” Well, it saved thousands of lives, and it no doubt was worth it; but if it had just been one? While the tragedy of lost imago Dei is in some very real sense incalculable, it’s a plain fact that we allow greater losses for lesser reasons regularly.
At minimum, if my governor holds this belief, he ought to issue stay-at-home orders much more frequently, and probably close down highways that facilitate car accidents and restaurants that facilitate heart disease. Which is to say, I don’t think he holds this belief. He knows—and we all know, actually—that there are trade-offs. What we don’t know is how to talk about them. Similarly, there are few things as obviously prudent as wearing a face mask to prevent the spread of an airborne disease, even if the precise level of protection afforded is uncertain. And yet, here we are.
The work of statistical analysis in the world of policy is actually to help us do what can be hard to do on our own: identify trade-offs, recognize the uncertainty, and use (formal) inference to help make a prudent judgment. Let me be quick to say that it can’t do these things for us; it is a tool, a source of counsel. Yet statistics is sometimes enlisted instead to predict the future, like a crystal ball. It turns out it’s not up to the task—but not because of any failing of its own. It is because predicting the future is not possible—even with numbers!
But statistics has at least two very helpful features. First, it can give us some insight into population characteristics by acknowledging a margin of error based on limited data. While it’s unlikely that exactly 55 percent of prospective voters prefer Biden, we can calculate a margin of error and conclude that a wider range of possibilities—say, 52–58 percent—very likely includes the true level of public support for Biden. This “confidence interval” is determined by the number of people in the survey (more people = narrower interval) and the level of confidence we judge to be necessary in the context at hand (the more confident we want to be that the true value is in the interval, the wider we need to make it). Suppose we conclude that, based on those calculations, we have 95 percent confidence that our interval contains the true value. Crucially, though, this still doesn’t mean it must be in that range. It means that the failure rate of this “interval” process is 5 percent; if we gathered one hundred samples and made intervals with each of them, about five of those intervals would fail to contain the true value. It’s possible, but very unlikely, that our own sample could be one of those outliers (see, e.g., the 2016 election). Second, with much less confidence, statistics can tell us which outcomes are most and least likely given our sample. This, however, will reflect not one but two sources of uncertainty: in our example, it will reflect our uncertainty about the true public opinion now, and our uncertainty about what might change before election day. If a statistical analyst today suggests, say, an 80 percent likelihood that Biden will win, they will not have been “wrong” if he loses. Their model suggested a 20 percent probability that he would lose; losing was very much possible. The challenge for any form of forecasting boils down to this: a prediction (rightly) assigns probabilities to a variety of outcomes, but reality only happens once.
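Those margin-of-error calculations can be sketched in a few lines. This uses the textbook normal approximation for a proportion (a simplification; real pollsters adjust for survey design), and it shows that with only one hundred respondents the 95 percent interval is noticeably wider than the illustrative 52–58 percent, which is roughly what a sample of a thousand would deliver:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a population proportion.

    z = 1.96 corresponds to 95 percent confidence.
    """
    p_hat = successes / n
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# The example poll: 55 of 100 prefer Biden.
low, high = proportion_ci(55, 100)
print(f"n=100:  {low:.1%} to {high:.1%}")   # about 45% to 65%

# The same 55/45 split in a sample of 1,000 narrows to roughly 52-58 percent.
low, high = proportion_ci(550, 1000)
print(f"n=1000: {low:.1%} to {high:.1%}")
```

The “more people = narrower interval” rule is visible in the formula itself: the margin shrinks with the square root of the sample size, so quadrupling the sample only halves the interval’s width.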
I’d be remiss if I didn’t mention a third thing that statistics can do, though the opportunity is too rarely seized: it can make you marvel. The way that a relatively small sample can be used to draw quite reliable inference about a whole population, with only a band of uncertainty, is stunning. While there is remaining uncertainty—indeed, statistical analysis always maintains that outliers are rare but not impossible—the difference in our knowledge from before to after a statistical analysis is profound. I must admit, when I teach my students the central limit theorem—the field’s most profound discovery (yes, discovery, not invention)—I cannot help but frame it as a kind of miracle. Chaos and randomness, harnessed to create a perfectly symmetric distribution. It’s like a word spoken over the deep.
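The miracle can even be watched at home. Here is a small simulation of my own construction (not from any particular textbook): draw repeated samples from a deeply lopsided distribution, and the distribution of their means nevertheless comes out centered on the truth, with a spread the theorem predicts exactly.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

# An exponential distribution (mean 1) is heavily skewed: chaos and randomness.
sample_size = 50
num_samples = 2000
means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(sample_size))
    for _ in range(num_samples)
]

# The central limit theorem says these sample means cluster symmetrically
# around the true mean (1.0), with spread near 1/sqrt(sample_size) ~ 0.14.
print(round(statistics.fmean(means), 2))   # close to 1.0
print(round(statistics.stdev(means), 2))   # close to 0.14
```

A histogram of `means` would show the familiar bell curve, even though no individual draw looks remotely bell-shaped.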
Strangely, with so much to offer to the mind and perhaps even the soul, I find people often expect much more of statistics than it can deliver. Why not let it do what it does best? Economists like me and other policy analysts make liberal use of a treasure trove of statistical techniques to try to help answer important policy questions like: Is this anti-poverty program successfully moving people out of poverty? How much does an increase in unemployment benefits affect the length of time people claim them? How many COVID-19 cases were prevented by stay-at-home orders? (Since you might be interested, about 9 million in the United States by the end of April, give or take.)
And yet after offering nearly limitless possibilities for learning about the world around us, and helping us make thoughtful decisions grounded in both what we can observe and what we know we can’t observe, statistical analysis is pressured to become instead a source of certainty. “Hard” numbers are demanded from analysts who know in their heads (and their hearts, if they were well-trained) that their report should acknowledge nuance and uncertainty. These numbers are demanded by the voter, the newspaper reader, the manager, the university president, the policy-maker, the communications director in every politician’s office, the president, and the prime minister. Everyone wants the numbers to tell them what is going on, what makes sense, what to do next. Intelligent, capable people feign an inability to understand nuance that is actually just an unwillingness to tolerate it. While the data need statistical analysis to make them informative, even that analysis still requires an interpreter—and everyone would like that interpreter to be certain. The use of numbers instead of words can give the illusion that she is.
In the realm of federal policy advice, numbers get attention. The Bureau of Labor Statistics reports that there are 80 sociologists and 320 historians (not including professors) working in the Washington, DC, metro area, dwarfed by a fleet of 4,270 political scientists. Surely it’s understandable that we would need many political scientists at the seat of our federal government; no doubt many of them are doing quantitative social science. What’s remarkable to me is that the area also employs 4,650 statisticians and an armada of 8,060 economists! There is much good work to do, of course, as I just described—but one senses that the uncertainty in the analyses by these “numbers people” sometimes gets lost in translation, or even left out of the message entirely. Let’s face it: uncertainty just doesn’t sell. Uncertainty is, well, unpopular. Maybe because it’s scary.
And so we find ourselves back at our fragile humanity, imagining we would be stronger if we just knew for certain. We just want the answer, without a wait, without a confidence interval. We want the truth: truth without grace—no “grace period” for learning, no “grace interval” for uncertainty. And yet, as we search the Scriptures, we find precious little support for this kind of obsessive quest. “Certainty” is not a defining characteristic of Christians or even part of our telos. It is not on the short list of virtues—faith, hope, and love. It’s not even on the long list—which adds joy, peace, patience, kindness, goodness, gentleness, and self-control.
I delight to think what we can learn if instead of idolizing certainty, we apply a bit more patience and self-control to our use of data. Proper use of statistical analysis does just that—acknowledging future unknowns and conveying even what we learn within careful bounds of uncertainty. If we can acknowledge the uncertainty that is ever present, we will better appreciate the insights that statistical analysis has for us. There is beauty in the work that it does to bring order to the chaos of raw data, even if there are still some fuzzy edges. Statistics takes speechless data and makes it talk. What a wonder! We ought not be disappointed that, having been gifted with speech, it lacks omniscience. It is, in the end, another reflection of the intrinsically uncertain world in which we live, where we see in a mirror darkly—even when there are numbers.
“Now faith is confidence in what we hope for and assurance about what we do not see. This is what the ancients were commended for” (Hebrews 11:1–2 NIV).