From Casual Empiricism to Scientific Facts in Economics

Year: 2015
Topic: Methodology

Empirical work in economics and, more generally, in social sciences often comes in for criticism. In some circles, in fact, there seems to be an extreme skepticism about any empirical claim in economics whatsoever. I strongly disagree with that view. These critics often ignore – or are unaware of – the inherent difficulties in empirical exercises. There is, generally speaking, a huge gulf between econometric models, which are invariably based on a set of assumptions, and the process that generates the data in the real world. Rather than viewing this area of endeavor with skepticism, I have a great deal of admiration for the progress that has been made in the way empirical work in economics is being conducted. There is, naturally, always room for improvement, but that obvious fact should not obscure the progress made in applied economics over the past decades.

It is worth recalling here that Friedman’s famous 1953 article on the methodology of positive economics was motivated by the so-called marginalist controversy between Lester, on the one hand, and Stigler and Machlup, on the other, in the mid-1940s. Friedman ended up rejecting the positions of both sides in the debate and went on to develop his methodological contribution, finally published in 1953. In the process, he had an interesting exchange of letters with Don Patinkin, who was profoundly disappointed with the conclusion reached by Stigler and Machlup. He expressed his frustration in one of his letters to Milton in the following words: “The damned thing is that it’s almost impossible to set up a critical experiment in economics” (cited in Backhouse, 2009).

Why are experiments so important in the construction of scientific facts? Many different kinds of processes are at work in the world around us, and they are superimposed on, and interact with, each other in complicated ways. As a result, “gold standard” facts in science come in the form of experimental results. Nonetheless, we should never forget that there is a significant sense in which (quasi-) experimental facts and theory are interrelated. Scientific “facts” can be overturned if the knowledge underlying them is deficient or faulty. They can also come to be seen as irrelevant in the light of some shift in theoretical understanding. Still, however informed by theory an experiment may be, in one sense scientific facts are determined by the world as it is, not by theory. This is why the attempt to test the adequacy of scientific theories against experimental results is a meaningful quest (see, among others, Chalmers, 2013).

In the first edition of his famous textbook on economics, Paul Samuelson said the following about applied work in economics: “This is a difficult and complicated task. Because of the complexity of human and social behavior, we cannot hope to attain the precision of a few of the physical sciences. We cannot perform the controlled experiments of the chemist or biologist. Like the astronomer we must be content largely to ‘observe’. But economic events and statistical data observed are unfortunately not so well behaved and orderly as the paths of the heavenly planets. Fortunately, however, our answers need not be accurate to several decimal places; on the contrary, if only the right general direction of cause and effect can be determined, we shall have made a tremendous step forward.”

At least in the area of microeconomics, this has certainly changed. Not only do we now design and conduct field experiments to test a range of critical issues in economics, but we have recently reached the point where we can also do so in multiple environments in order to evaluate the robustness of our results. Even so, as I argued in a previous post (on the significance of replication in the social sciences), we need to further promote the replication of relevant, internally valid studies in both similar and different environments in social sciences.

An excellent example of this strategy is the collective work published in the American Economic Journal: Applied Economics (edited by Esther Duflo) on the effects of micro-credits for the poor and summarized by Abhijit Banerjee, Dean Karlan and Jonathan Zinman (2015). The six experiments that they compiled (conducted by a large group of scholars) cover a wide range of locations and experimental designs. A number of robust scientific facts stand out, with one of the main ones being that micro-credits seem to have modestly positive, but not transformative, effects on the poor. In light of this finding, and relying on theoretical reasoning, these authors suggest directions for further research. Methodologically, this is science at its best. These authors start out on common ground in terms of their set of hypotheses, test them using experimental designs and then end up revising their hypotheses in order to advance knowledge through future experimental facts. Empirical work cannot be more exciting and fulfilling than this. 

At this point, however, it is only fair to acknowledge that experiments in the social sciences do have their limitations, sometimes severe ones (in the limit, experimentation is not feasible for a wide range of important causes). But surely the best way to overcome many of them would be to work within families of experiments that would allow us to vary causes, one by one, over a range of interesting values and environments, with the entire effort being conducted under the umbrella of an abstract model that permits us to interpret, extrapolate and generalize the facts established by our experiments.

Returning to Samuelson (1948), it is worthwhile here to recall his words on abstraction: “All analysis involves abstraction. It is always necessary to idealize, to omit detail … to set up the right questions before we go out looking at the world as it is. Every theory, whether in the physical or biological or social sciences, distorts reality in that it oversimplifies. But if it is a good theory, what is omitted is greatly outweighed by the beam of illumination and understanding that is thrown over the diverse empirical data.” To understand human behavior, we need to abstract from it. That abstraction will also be reflected in our empirical work. Thus, in my view, it is preposterous to expect that we can enhance the robustness of the accumulated empirical knowledge in economics simply by re-analysing previously studied datasets without thorough theoretical modelling guiding those exercises (as is fashionable nowadays).


References:

Backhouse, R. (2009): “Friedman’s 1953 essay and the marginalist controversy”, in U. Mäki (ed.), The Methodology of Positive Economics: Reflections on the Milton Friedman Legacy, Cambridge University Press.

Banerjee, A., D. Karlan and J. Zinman (2015): “Six Randomized Evaluations of Microcredit: Introduction and Further Steps”, American Economic Journal: Applied Economics, 7(1), 1-21.

Chalmers, A. F. (2013): What is this thing called science?, Hackett Publishing Company. 

Samuelson, P. (1948): Economics: An Introductory Analysis, McGraw-Hill.
