Scientific Credibility and the File-Drawer Problem
A trope of contemporary social commentary is that “science” has somehow become “politicized,” such that people no longer trust or believe what is presented as the scientific consensus on important social, political, and economic issues. The most salient example until recently was climate change, where various scientific professionals, associations, interest groups, and the like were portrayed as purely disinterested seekers of truth while disfavored outsiders were described as self-interested, ideological, or worse. We saw this in particular on social media, where outsiders—often scientifically literate laypeople, technical professionals in adjacent fields, independent researchers, and others who paid careful attention to theory and data—were dismissed as “tech bros,” arrogantly expressing opinions without proper authorization. As I noted a few years ago, this trope ignores the fact that scientific research, education, and communication are social institutions and can be analyzed like any other group of purposeful human actors. Joe Salerno’s 2002 article on the role of resources, ideology, and institutions in the rebirth of the Austrian school is a good example of how to analyze intellectual and social movements from an institutional point of view; Michael Bernstein’s A Perilous Progress does something similar for the economics profession as a whole.
The idea of scientists as a priestly caste, criticism of whom constitutes “science denial” or “spreading misinformation,” is of course central to the conventional narrative about covid-19. Many commentators worry that substantial public disagreement on the nature and significance of the covid-19 pandemic and the efficacy of vaccines and mitigation measures such as lockdowns, border closures, masks, and social distancing will contribute to a decline in trust in scientists and even science itself. Indeed, there is evidence that experience with previous epidemics leads to reduced trust in scientists and their work (though not in “science” in the abstract).
Little acknowledged in these discussions, however, is the role that scientists themselves, particularly in their public communications, have played in eroding public trust in their profession and its work. Systematic misrepresentation of the scientific evidence on covid-19 and its mitigation measures has been a central feature of news coverage and social media commentary for the last year and a half. Press releases from scientific organizations and government agencies, news reports of scientific papers, and social media posts by prominent scientists continue to focus on statistics such as the number of positive test results without controlling for the number of tests administered, the characteristics of the tested population, and the cycle threshold (sensitivity) for PCR tests; to present highly aggregated measures of infection and spread that obscure the enormously skewed distribution in severity by age and health status; and to ignore context that would allow for comparison across similar locations or among similar diseases over time.
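The point about raw case counts can be made concrete with a toy calculation (the numbers below are hypothetical, chosen only for illustration): the same headline figure of “more positives this week” can coexist with a falling positivity rate, if testing volume grew faster than cases.

```python
# Hypothetical figures: reported positives rise from week 1 to week 2,
# but only because testing volume tripled; the share of tests coming
# back positive actually fell by half.

weeks = [
    {"label": "week 1", "tests": 10_000, "positives": 800},
    {"label": "week 2", "tests": 30_000, "positives": 1_200},
]

for w in weeks:
    rate = w["positives"] / w["tests"]  # positivity rate, not raw count
    print(f'{w["label"]}: {w["positives"]} positives, '
          f'positivity rate {rate:.1%}')
```

Here week 2 reports 50 percent more positives than week 1, yet its positivity rate (4 percent) is half of week 1’s (8 percent), which is why reporting raw counts without the denominator can mislead.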
Another problem is the idea that, in addressing a complex public policy issue with a variety of social, cultural, and economic ramifications, only the views of infectious disease epidemiologists (and the personal experiences of healthcare professionals) are relevant in deciding if cities should be locked down, children prevented from attending school, businesses closed, and the like. Issues such as the constitutionality or legality of mitigation measures, what risks people consider reasonable, and how to assess marginal tradeoffs among specific health outcomes and other goals—even the idea of tradeoffs and marginal analysis itself—are considered irrelevant.
More specifically, there is a wide gulf between the scientific evidence on mitigation measures—the so-called nonpharmaceutical interventions or NPIs—and the way this evidence has been described. Back in spring 2020, when these mitigation measures began to be imposed, I did my own mini-review of the scientific literature on the effectiveness of NPIs against the spread of infectious diseases, particularly respiratory viruses. I focused on the handful of studies that featured randomized controlled trials or quasi-natural experiments in a real-world setting. The consensus of this precovid literature is that masks, frequent hand washing and hand sanitizing, distancing, and the like had either very small effects or no effect on disease severity or spread. This was at the time that shops, restaurants, schools, and offices were beginning to require masks and social distancing, installing plastic barriers and HEPA filters, adding extra cleaning and sanitizing, and implementing other interventions, presumably on the basis of hard, scientific evidence. But that evidence was lacking. I didn’t see it until later, but Slate Star Codex published a review of mask studies that covered many of the same papers and reached the same conclusions I did.
What about now, more than a year into the covid-19 pandemic? Surprisingly, most of the evidence offered by government agencies is based on computer or lab simulations of the movement of particles (or anecdotes). The most highly touted field studies are observational (i.e., there are no treatment and control groups, making it impossible to assign causality). Given that the scientific (and social scientific) establishment has maintained for decades that randomized controlled trials are the “gold standard” for assigning causality, the absence of RCT evidence on masks and other NPIs is surprising. Here is a recent review of what we know. The weight of the evidence is that masks, distancing, plastic barriers, and the like have played at best a very small role, and most likely no role, in mitigating the spread of covid-19. The evidence is almost entirely at odds with the message presented to the public.
Scientists themselves have played a role in spreading this misinformation, partly via the “file-drawer problem,” in which experimental results that support the preferred narrative are publicized and promoted, while those that disconfirm the narrative are downplayed or ignored. A good example is a recent, large-scale study conducted by the US Centers for Disease Control on the effectiveness of masks in schools. Media outlets and the CDC itself breathlessly touted the finding that mask requirements for unvaccinated teachers, along with improved air circulation, modestly reduced virus transmission in schools. However, the executive summary and virtually all the news accounts neglected to mention that the study also looked at student mask wearing, distancing requirements, hybrid teaching, physical barriers in classrooms, and the installation of HEPA filters and found that these had no statistically significant effect on transmission.
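The statistical mechanics of the file-drawer problem can be sketched in a few lines. In this simulation (a stylized illustration, not a model of any actual study), the true effect of an intervention is zero, but only studies whose estimated effect comes out positive and “statistically significant” get publicized; the published record then shows a large spurious effect.

```python
import random
import statistics

# Stylized file-drawer simulation: the true effect is 0, each study's
# estimate is the true effect plus noise, and only estimates that are
# positive and "significant" (z > 1.96) leave the file drawer.

random.seed(0)
n_studies = 2000
se = 1.0  # standard error of each study's effect estimate

estimates = [random.gauss(0.0, se) for _ in range(n_studies)]
published = [e for e in estimates if e / se > 1.96]  # selective reporting

print(f"mean effect, all studies:      {statistics.mean(estimates):+.3f}")
print(f"mean effect, published only:   {statistics.mean(published):+.3f}")
print(f"share of studies published:    {len(published) / n_studies:.1%}")
```

With a true effect of zero, roughly 2.5 percent of studies clear the one-sided significance bar by chance alone, and the average published estimate is well above two standard errors, even though the full set of studies averages out to approximately nothing. Selective reporting, not the underlying data, produces the apparent effect.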
As schools (and colleges) around the world hotly debate mask requirements, the fact that the most comprehensive experimental study to date found that masks have no effect on transmission is completely ignored, because few people know about this finding. (Kudos to David Zweig and New York Magazine for covering the story in a major feature this week: “Over the course of several weeks, I also corresponded with many experts—epidemiologists, infectious disease specialists, an immunologist, pediatricians, and a physician publicly active in matters relating to covid—asking for the best evidence they were aware of that mask requirements on students were effective. Nobody was able to find a data set as robust as the Georgia results [which found no effect]—that is, a large cohort study directly looking at the effects of a mask requirement.”) Among the general population, the most comprehensive, large-N experimental study on masks to date is the Danish RCT, which found no effect of mask wearing on transmission—a study that was file-drawered in its entirety.
If scientists are concerned about a decrease in public confidence in their work, and the standing of scientific research more generally, they should look not at “Twitter trolls” but at the way scientists themselves have presented their findings and the magnitude and significance of their work. Science is a process of inquiry, not a body of revealed truth, and scientists are participants in the community of exploration, discovery, analysis, and communication, not arbiters of “misinformation.” By positioning themselves as guardians of truth and the only legitimate authority on complex policy issues, certain segments of the scientific community have largely created the very problems they now deplore.
Update: Here is Jacob Sullum making essentially the same points.