Bertrand Russell once wrote that “[w]hatever knowledge is attainable, must be attained by scientific methods; and what science cannot discover, mankind cannot know.” If it were possible, one would love to go back and ask Russell through what scientific means he came to such knowledge, but there it is. A more blatant statement of a common idea would be hard to find. It would be laughable if it weren’t so pervasive. Somehow we have come to believe that if something cannot be quantified and repeatedly tested, it cannot be known. There has, of course, been a significant backlash against this of late, but the nature of that backlash has hardly been helpful. Rather than suggest that all knowledge (including science) has always been the result of reasoned argument from a set of presuppositions, the modern backlash has been to trash the idea of objective knowledge outright, thereby treating scientific knowledge with the same skepticism that the popularizers of science, like Russell, had shown towards everything outside of science. Nevertheless, there remains a vestigial respect for whatever is purportedly verified through quantitative methods, without any real understanding of the true nature of scientific endeavour. My own observations of the fallout from this belief come from the field of health care management, where there is a growing desire amongst policy makers to improve practice through the application of quantitative analysis, while few managers have any real training in the proper assessment of such methods. It is an excellent endeavour, but its application, to date, leaves much to be desired.
I ought, from the start, to confess a natural bias towards quantitative analysis. I did my undergraduate degree in pure mathematics, an MSc in applied mathematics and a PhD in Operations and Logistics. I now spend my days building mathematical models to solve health care management problems. One might therefore assume I would welcome a reverence for quantitative analysis; however, it is not that simple. Make no mistake, I am firmly convinced that quantitative analysis has a lot to offer in terms of improving the way in which we run our health system (or any business for that matter), but the manner in which it is currently touted is troubling.
In my dealings with health care managers, I have often encountered those who are incredibly enthusiastic about the potential for quantitative analysis to improve health care operations. They wax eloquent about the power of mathematics to transform the way that they run their business, and while I don’t disagree with them, their fervency often troubles me. What particularly discomforts me is how dismissively they brush aside any discussion of the inevitable caveats that attend any modeling process. Such people, who want to see the results without the method, are the “uncritical enthusiasts”.
There are a number of reasons why these uncritical enthusiasts make me uncomfortable. First, every mathematical model is an abstraction or simplification of the real system. Indeed, as Einstein once said: “As far as the propositions of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality”. As such, our models can only approximate reality to a degree; the more variable the parts of a system, the less amenable that system is to mathematical analysis. One would have thought such an idea so self-evident as not to require stating, and yet it does, for it is so often ignored. The more strident voices on both sides of the global warming debate are classic examples. How often have you heard the words “research has shown…” used to justify divergent opinions? Rarely are these claims followed up with “but only under conditions A, B and C”, and rarely are they spoken by someone who actually understands what was done. Such language fails to recognize the tenuous nature of research and thus fails to move forward with the appropriate circumspection.
Second, any quantitative analysis necessarily struggles with the qualitative aspects of a decision. Worker satisfaction, quality of life and well-being are all but impossible to adequately incorporate into a mathematical model, though plenty of my colleagues misguidedly attempt to do precisely that. One example of this attempt is the measure adopted in many cost-effectiveness analyses called “quality-adjusted life years”—QALYs for short. In comparing multiple potential treatments, a natural measure of performance is the average number of life years gained through the treatment. But suppose one has two treatments—one that would allow you to live nineteen additional years without side effects and a second that would allow you to live twenty additional years, but confined to a wheelchair. Simply measuring life years gained would lead you to value the second treatment option over the first, but most people would presumably disagree with that assessment. In an effort to overcome this shortcoming, the modeler will instead compare the treatments based on QALYs, in which a year in complete health might be deemed equivalent to three years in a wheelchair. Somehow, it is believed, through this process of “quantifying” the quality of any given condition (often justified through surveys or thought experiments), the subjective nature of the problem has been removed. Of course, that is nonsense. The subjective aspect of the decision has not been removed; rather, it has simply been incorporated into the quantitative model and given a veneer of objectivity. The reader, if not careful, may completely miss this.
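The arithmetic behind that comparison is simple enough to sketch. In the toy calculation below, the utility weight of 1/3 for a year in a wheelchair is purely illustrative, following the thought experiment above; real QALY weights are elicited through surveys and preference studies, which is precisely where the subjectivity hides:

```python
# Toy illustration of quality-adjusted life years (QALYs).
# The utility weights are invented for illustration only; in practice
# they come from surveys or preference-elicitation studies.

def qalys(life_years: float, utility: float) -> float:
    """QALYs = life years gained x utility weight (1.0 = full health)."""
    return life_years * utility

# Treatment A: 19 extra years in full health (utility 1.0).
# Treatment B: 20 extra years in a wheelchair; if one year in full
# health is deemed equivalent to three wheelchair years, the
# wheelchair utility weight is 1/3.
treatment_a = qalys(19, 1.0)   # 19 QALYs
treatment_b = qalys(20, 1 / 3) # about 6.7 QALYs

# Raw life years prefer B (20 > 19); QALYs prefer A.
print(treatment_a, treatment_b)
```

Note that the entire ranking turns on the choice of the utility weight — change 1/3 to 0.96 and treatment B wins again. The model is only as objective as that one subjective number.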
Describing the process this way might make it seem somewhat underhanded on the part of the modeler, but I think such a perspective would be unfair. The modeler recognizes a genuine defect in the model and seeks to address it through the only means he knows—by turning what is qualitative into something quantitative, something measurable. Most troubling of all, he genuinely believes that he has succeeded and is thus offering an objective means of addressing these apparently subjective questions. Like Russell, he is convinced that science is the only means to knowledge and thus that the only way to address subjective questions is to quantify them. The manager or policy maker, on the other hand, is simply content to have a means of determining which treatment is the better option. The less ambiguity to deal with, the better. Hence the complete lack of interest in the method. The modeler has provided the policy maker or manager with what they want, and thus they are content to use the model as justification.
Except when it doesn’t; for there are of course those times when the quantitative analysis does not support the intuitive or expected answer. Instead, it calls for costly and messy change that nobody wants. Suddenly the “uncritical enthusiast” gives way to the “unwarranted skeptic”. Now it is the method that is under scrutiny, and every little deviation from reality, every minor simplification of how the system truly functions, is brought forward as evidence that the results of the model simply cannot be trusted. “Our system is too complicated and the human element too pervasive for any mathematical model to possibly provide us with useful information. We are better off trusting our own experience and intuition.” While the uncritical enthusiast sees all the benefits while ignoring the conditions, the unwarranted skeptic sees all the conditions and therefore dismisses the benefits. This type of manager is the ultimate doubting Thomas, unwilling to be led further than he himself can see. “I don’t understand your model and therefore I don’t trust it.” Perhaps recognizing the tenuous nature of research, he dismisses such “knowledge” as not certain enough to be trusted.
And there the two types are left, one dismissing the model out of hand because it fails to confirm his own beliefs and convictions, the other upholding the model with an equal lack of comprehension simply because it confirms his beliefs and convictions. Nor can one truly blame either, for there is no way they can enter into a meaningful discussion of the merits of the model, simply because critiquing the model requires an expertise that lies beyond their abilities. They are left in the unenviable position of either trusting the experts or else ignoring the analysis in favour of their own experience and intuition. Richard Lewontin, the evolutionary biologist, in a piece he wrote for The New York Review of Books, put it this way:
“When scientists [or researchers] transgress the bounds of their own specialty they have no choice but to accept the claims of authority, even though they do not know how solid the grounds of those claims may be. Who am I to believe about quantum physics if not Steven Weinberg, or about the solar system if not Carl Sagan? What worries me is that they may believe what Dawkins and Wilson tell them about evolution. . .”
And therein lies the rub. Having been burned in the past by quantitative analysis that was poorly done or inadequately explained, the (un)warranted skeptic is likely to think twice before trusting such modeling exercises again. Yet without them he is left to make decisions complex enough that finding the optimal solution unaided is clearly impossible; as Lewontin goes on to say:
“On the one hand, science is urged on us as a model of rational deduction from publicly verifiable facts, freed from the tyranny of unreasoning authority. On the other hand, given the immense extent, inherent complexity, and counterintuitive nature of scientific knowledge it is impossible for anyone, including non-specialist scientists, to retrace the intellectual paths that lead to scientific conclusions about nature. In the end, we must trust the experts and they, in turn, exploit their authority as experts and their rhetorical skills to secure our attention and our belief in things that we do not really understand [. . .] Conscientious and wholly admirable popularizers of science like Carl Sagan use both rhetoric and expertise to form the minds of the masses because they believe, like the Evangelist John, that the truth shall make you free. But they are wrong. It is not the truth that makes you free. It is your possession of the power to discover the truth. Our dilemma is that we do not know how to provide that power.”
Let us grant that it is unlikely that managers or policy makers will be required in the future to undertake the kind of training that would be necessary for them to understand the analyses they ought to use in their decision-making processes. In other words, they never will have “the power to discover the truth” that Lewontin praises so highly. Nonetheless, it is quite possible for managers to develop a radar for detecting poorly done analysis while simultaneously developing a comfort level with models that do not exactly mimic reality. In fact, I teach a course in our Master’s in Health Administration program wherein the goal is to steer future managers away from becoming either uncritical enthusiasts (by helping them to detect the limitations and assumptions inherent in the methodology) or unwarranted skeptics (by familiarizing them with the concept of abstract modeling and its potential application). Such training will hopefully help minimize the strident shouts of “research has shown” that do nothing but damage policy debate.
And yet, even with such abilities at their disposal, these future managers will still be faced with the difficult choice between assessing analyses they do not truly grasp or muddling through on intuition alone, with all the mishandling of resources that entails. Thus, even more than additional training, what is needed in a manager is a commitment to pursuing the truth even when it runs contrary to popular opinion, and an ability to assess the trustworthiness of those who have the skills necessary for discovering that truth. Sadly, both characteristics are rare, since the first is so often shoved aside in the name of political expediency (couched as “necessity”) and the second involves both a leap of faith that our modern culture teaches us to reject and the humility to recognize when our own expertise is insufficient. Instead, we focus on the number of letters after a consultant’s name, the size of their company or even the “professionalism” of their webpage. Perhaps we ought to be asking for a character reference. As Lewontin rightly points out, the truth is that we are all dependent on the authority of others when we step outside our own narrow sphere of expertise. The question of whom we should trust is therefore an unavoidable one, and not one that we can slough off on peer-reviewed academic journals that are sadly complicit in allowing poor research to save face. The sheer proliferation of academic journals is evidence enough of the ease with which research can now find an outlet if the researcher knows how, and is willing, to play the game. Ideally, an organization ought to have employees who are trained to properly assess quantitative analyses and who have earned the trust of their employers to do that well.
The so-called “triumph of the quantitative” turns out, therefore, to be no such thing. Though there is an increasing desire to couch our arguments in quantitative language, the understanding required to truly grasp the implications of the analysis is beyond the abilities of most policy makers. Appeals to quantitative analysis therefore become appeals to authority, often with insufficient diligence paid to ensuring that the authority is legitimate. Even worse, the reaction of the policy maker to the research is often guided not by the legitimacy of the authority but by the concordance between the results of the research and the beliefs and convictions of the policy maker. Analysts are complicit in this as well, often hiding the caveats and limitations of the model in an effort to provide quick and clean results. There is a failure of character on all fronts.
If we are not to return to the rather laughable epistemology of Russell, how else are we to proceed? While that is much too large a question to adequately answer at this point, let me suggest that it begins by throwing out the idea that we need sure and certain knowledge. Augustine’s credo ut intelligam—“I believe in order that I may understand”—is a much more realistic expression of how knowledge is attained. It acknowledges that all knowledge requires a set of suppositions that are not questioned unless they are either shown to be internally inconsistent or else lead us to a conclusion so unpalatable as to force us to return to the suppositions that led us there. It provides a way forward in the face of uncertainty in our knowledge without retreating into the unlivable tenets of postmodernism that would have us all trapped within the sphere of our own opinions.
I began this essay with the brashness of Bertrand Russell, so let me end with what, in my own estimation, is a much richer epistemology. Drusilla Scott, in her book on Michael Polanyi, The Everyman Revived, sums it up well:
“There is no guaranteed certainty for man; that has been a will-o’-the-wisp leading him astray. But there is a sureness of direction and of faith, which can be found in many different kinds of knowledge as well as in science. This faith is not irrational nor subjective. We have to believe in our own powers, but we have to train them, use them and discipline them as the scientist does his faculties. The truth of feeling, of moral sense, and of art need as much skill and dedication as the truth of science. It is not our every emotional whim that is to be trusted, any more than our abstract impersonal science, but the best judgment and discrimination that we can attain through self-discipline and through apprenticeship to the masters of our art, who speak to us with authority because we recognize in our hearts that they speak the truth.”