Mises Wire


Randomized Controlled Trials and Economic Questions



The Austrians have long argued that equilibrium models of economic phenomena cannot capture the causal, realistic aspect of human behavior. "All things are subject to the law of cause and effect," says Menger in the famous opening line of his Principles of Economics. Formal economic models, in contrast, typically depict systems of equations in which each variable simultaneously determines the values of the other variables. 

And yet, mainstream empirical economics has undergone a radical shift in the last two decades, moving away from reduced-form, atheoretical, equilibrium-based models and embracing newer approaches that purport to capture causality. This movement, described as the "credibility revolution," holds that social scientists can identify cause and effect, not through a priori reasoning as in Mises's approach, but by adopting the methods used in biomedical research. Through careful research design, one can use experimental methods to identify factors or variables that "cause" particular outcomes, even without knowing the underlying mechanisms in a deep or intuitive sense. For example, to find out if workers supply a greater quantity of labor in response to an increase in the wage rate, one designs an experiment in which a "treatment" (an increased wage) is applied to one group of workers while another group, chosen to match the characteristics of the first, gets the "control" or placebo treatment (no wage increase). If the two groups are carefully matched on all characteristics other than wage rates thought to affect labor supply or, even better, groups of workers are randomly assigned to the treatment or control group, any differences in hours worked can be attributed to the change in wage. Careful matching (through observed characteristics or propensity scores) or random assignment takes care of the ceteris paribus condition that, in the older approaches, would be handled through multiple regression. If there is a statistically significant difference in outcome between the two groups, the treatment can be said to "cause" the outcome. 
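The logic of the wage experiment described above can be sketched in a few lines of code. This is a minimal simulation under purely illustrative assumptions (the sample size, baseline hours, and an assumed two-hour treatment effect are all made up, not data): workers are randomly assigned to treatment or control, and a permutation test asks whether the observed gap in mean hours worked is larger than chance alone would produce.

```python
import random
import statistics

random.seed(0)

N = 200            # total workers (illustrative assumption)
TRUE_EFFECT = 2.0  # assumed extra weekly hours induced by the higher wage

# Unobserved heterogeneity: each worker's baseline weekly hours.
baseline = [random.gauss(40, 5) for _ in range(N)]

# Random assignment handles the ceteris paribus condition: each worker is
# equally likely to be treated, so other characteristics balance out on average.
ids = list(range(N))
random.shuffle(ids)
treated_ids = set(ids[: N // 2])

hours = [b + TRUE_EFFECT if i in treated_ids else b for i, b in enumerate(baseline)]
treat = [hours[i] for i in treated_ids]
control = [hours[i] for i in range(N) if i not in treated_ids]
observed_gap = statistics.mean(treat) - statistics.mean(control)

# Permutation test: under the null of no effect, group labels are arbitrary,
# so reshuffling them shows how large a gap arises by chance alone.
extreme = 0
B = 2000
for _ in range(B):
    shuffled = hours[:]
    random.shuffle(shuffled)
    gap = statistics.mean(shuffled[: N // 2]) - statistics.mean(shuffled[N // 2 :])
    if abs(gap) >= abs(observed_gap):
        extreme += 1

p_value = extreme / B
print(f"observed gap: {observed_gap:.2f} hours, p ~ {p_value:.3f}")
```

Note what the sketch does and does not deliver: a small p-value licenses the claim that the treatment "caused" the gap in this sample, but it says nothing about the mechanism, and nothing about whether the effect would hold outside the experimental setting, which is precisely the external-validity worry raised below.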

Of course, this is a different concept of causation than Menger's or Mises's. Even within the mainstream, however, there has been some pushback against what some describe as a fetish for causal inference. Shortly after receiving his Nobel Prize, Angus Deaton complained that randomized controlled trials (RCTs), most famously associated with MIT's Poverty Action Lab, were being asked to do too much -- what works in a small, experimental setting may have little "external validity," i.e., may not apply to other settings. George Akerlof recently observed that the emphasis on research design may come at the expense of the importance of the underlying economic question (a point I have also made). RCT enthusiasts have been called randomistas, not always as a compliment.

These critiques get at a fundamental point that has bothered me for some time. The increasing popularity of RCTs, instrumental variables models, difference-in-differences regressions, propensity score matching, regression discontinuity, and similar research designs seems to have coincided with a narrowing of focus. Rather than pursuing the big questions of economic theory, empirical researchers are applying more and more effort to understanding smaller and smaller questions -- What procedures get students to study harder for a test? How does race or gender affect the number of job offers? These are a far cry from the kinds of questions that have motivated economists throughout the centuries. 

This month's British Medical Journal (BMJ) features a hilarious send-up of RCTs in the form of a study, "Parachute Use to Prevent Death and Major Trauma When Jumping From Aircraft." Because the common belief that one should not jump out of a plane without a parachute is based merely on "biological plausibility and expert opinion," an RCT was designed to test for a true causal effect. After randomly assigning test subjects into parachute and no-parachute groups, the researchers could not detect a statistically significant treatment effect -- exactly the same number of jumpers was killed or injured in each group, namely zero, because the subjects were jumping out of small planes parked on the tarmac. Despite the potential lack of "external validity," the researchers (properly, according to the tenets of RCTs) conclude that there is no benefit to wearing a parachute when jumping out of a plane. 

Peter G. Klein is Carl Menger Research Fellow of the Mises Institute and W. W. Caruth Chair and Professor of Entrepreneurship at Baylor University's Hankamer School of Business.

Note: The views expressed on Mises.org are not necessarily those of the Mises Institute.