What’s Curiously Missing from the Wolfers Slides

Once again, the New Keynesians are cherry-picking from post-Keynesian and Marxian critiques of economic theory. This time the subject is macroeconomics – the whole thing. Justin Wolfers of the University of Michigan recently posted a slideshow he presented at a lecture in celebration of former IMF chief economist Olivier Blanchard. The contents are largely what has come out of the Review of Radical Political Economics (RRPE) for the past 50 years, with some key omissions.

Can You Kill an Idea?

Wolfers’ talk identifies three broad trends in macroeconomics and a list of six “[t]hings that probably aren’t true.” The three trends – shift to empirics, cross-disciplinary fertilization, and policy relevance – very much play into the survival of the six theoretical dinosaurs:

  • Rational expectations
  • DSGE Models
  • Consumption Euler equation
  • Calvo pricing
  • New Keynesian Phillips curve
  • Classical dichotomy

Many of these, particularly the last two, I had thought were largely disregarded as empirical possibilities, surviving only as pedagogical simplifications from which to investigate deviations. The presentation, both in its description of the broad trends and in its treatment of the six macroeconomic fallacies, betrays a fundamental article of faith: the belief in microfoundations.

Data, Data Everywhere, But Not a Stop to Think

Wolfers identifies three major developments in the shift to empirics: falsification, the credibility revolution, and naturally occurring data (“big data”). Each of these three developments has undoubtedly helped push the above macroeconomic fallacies to their empirical limits. However, their continued application bespeaks a disregard for critiques of some of the core foundations of microfounded macroeconomics.

The shift from Keynes-era aggregate accounting to macroeconomic forecasting was abetted by an attempt to mold economics into an experimental science like physics and chemistry. Economists such as Milton Friedman, buttressed by the philosophy of Karl Popper, made the case that science, and hence economics, was essentially a project of eliminating potential conclusions through hypothesis testing. Utilizing statistical methods originally developed for eugenics, the falsificationists had at their disposal a method of quantitative hypothesis testing that would come to be known as “econometrics.”
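
A minimal illustration of the procedure (the data below are simulated, not drawn from any study; the significance-testing machinery descends from Fisher and Pearson):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # The falsificationist recipe: posit a null hypothesis (here, that the
    # mean of some economic outcome is zero), gather data, try to reject it.
    sample = rng.normal(loc=0.3, scale=1.0, size=100)

    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    verdict = "rejected" if p_value < 0.05 else "survives"
    print(f"Null {verdict} (t = {t_stat:.2f}, p = {p_value:.3f})")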

Almost as soon as falsification became empirical praxis, it was criticized by a nascent turn in the philosophy of science. W.V.O. Quine argued that since every scientific hypothesis relies on a litany of background assertions about how the world works, it is not possible to test any single hypothesis in isolation. Rather, hypothesis testing merely reveals how a hypothesis comports with the other hypotheses the researcher assumes to be true. Thus, to falsify any hypothesis is really to falsify a system of hypotheses, none of which can be firmly identified as the false one.

Further, Thomas Kuhn in The Structure of Scientific Revolutions argued that, despite the faith in falsification, scientific paradigms rarely change as the result of a single refutation, since an individual experiment’s result may just as easily reflect a peculiar data set. For Kuhn, science proceeds largely unfazed by results contrary to the reigning paradigm until enough countervailing evidence piles up to throw the paradigm into crisis and a new one comes along.

Imre Lakatos in The Methodology of Scientific Research Programmes argued that even accumulated countervailing evidence is not enough to dislodge a moribund scientific paradigm. In his framework, science is not merely a network of hypotheses but a highly ordered structure containing a ‘hard core’ of theoretical commitments buttressed by a ‘protective belt’ of auxiliary hypotheses that must be undermined before the theoretical core can be called into question.

The Credibility Gap

By the 1980s, economic science was shot through with holes. As I wrote last week, the Lucas critique called into question the possibility of aggregate approaches altogether. Drawing on Paul Feyerabend’s Against Method, Deirdre McCloskey’s 1983 “The Rhetoric of Economics” (later part of a collection of essays by the same name) called into question the enterprise of economics as a science on the basis that “science” itself is an incoherent concept reducible to elaborate acts of persuasion. That same year, Edward Leamer published “Let’s Take the Con Out of Econometrics,” another biting critique of the largely unquestioned methodology of economic science, with suggested avenues forward.

Between these two interventions, econometricians began embracing a cornucopia of econometric approaches now known as the “credibility revolution.” While this new frontier of empirical testing was certainly exciting, it answered the various critiques of economic science only by obfuscation. Each empirical approach promised new ways of looking at data, but none could claim any superiority over the others apart from its ability to confirm deeply held beliefs about how the economy works. With no way to discern spurious results from scientific breakthroughs, econometricians plodded along, publishing where the data confirmed conventional wisdom and recalibrating where it failed to do so.
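
Leamer’s worry is easy to reproduce. A minimal sketch (all data simulated): regress pure noise on enough candidate variables and a handful will clear the conventional significance bar.

    import numpy as np

    rng = np.random.default_rng(42)

    # An outcome and 50 candidate regressors, all pure noise by construction.
    n_obs, n_specs = 200, 50
    y = rng.normal(size=n_obs)
    X = rng.normal(size=(n_obs, n_specs))

    significant = 0
    for j in range(n_specs):
        x = X[:, j]
        b = (x @ y) / (x @ x)              # OLS slope for y ~ x, no intercept
        resid = y - b * x
        se = np.sqrt((resid @ resid) / (n_obs - 1) / (x @ x))
        if abs(b / se) > 1.96:             # the conventional 5% threshold
            significant += 1

    print(f"{significant} of {n_specs} noise regressors look 'significant'")

At a 5% threshold, two or three “findings” typically emerge from pure noise; a researcher who reports only those produces a publishable result without learning anything.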

Even as data sets became richer and computing power multiplied to keep pace, parameter estimation and calibration remained the empirical norm, however complex the models became. But these models still come up short. At base, all of them rely on a congruity between aggregate and micro-level factor demand. This is a fallacy for at least three reasons:

  1. Aggregating heterogeneous factors upward renders the resulting indices numerically meaningless. Some sectors may well have “well behaved” factor demand curves, but aggregation can only be done nominally, with imputed values that cannot separate “capital” from the profit rate used to value it. The aggregate figures are at best a rough average of an unknown substance called “capital.”
  2. It is highly unlikely that aggregate factor demand is downward sloping in price. Unless the ratio of “capital” to labor is constant across sectors, a steady increase (or decrease) in a factor’s price can lead producers to switch to more capital-intensive and then back to less capital-intensive techniques (or vice versa) – the “reswitching” problem.
  3. “Marginal productivity” is an illusion created by transforming a basic accounting identity and taking the derivative. This holds at any level of disaggregation, including disaggregation across qualities of labor (i.e., human capital); a worked derivation follows this list.
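
The derivation is short, in the spirit of the “Humbug” critique associated with Shaikh and with Felipe and McCombie (the notation here is mine). Value added is an identity, true by construction:

    Y \equiv wL + rK

Differentiating and writing the factor shares as s_L = wL/Y and s_K = rK/Y gives

    \hat{Y} = s_L(\hat{w} + \hat{L}) + s_K(\hat{r} + \hat{K})

With roughly constant shares this integrates to Y = A L^{s_L} K^{s_K}, a “Cobb-Douglas production function” whose fitted “marginal product” \partial Y / \partial L = s_L Y / L = w simply hands back the wage already embedded in the identity – no production theory required.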

As Anwar Shaikh demonstrates in his new book, Capitalism: Competition, Conflict, Crises, consumer behavior – also fundamentally constrained by such accounting identities – exhibits the same disconnect between orderly aggregate outcomes and underlying “irrationality.” Individuals need not compare marginal utility with price in order for aggregate demand to be “well behaved.” So long as all income and expenditures are accounted for, the same aggregate behavior emerges.
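
A minimal simulation of the point, in the spirit of the Becker (1962) argument Shaikh builds on (the agent count and income range below are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents = 10_000
    incomes = rng.uniform(10, 100, n_agents)   # heterogeneous budgets

    def aggregate_demand(p1):
        # Each agent spends a uniformly random share of income on good 1.
        # No marginal-utility comparison anywhere: micro behavior is
        # "irrational," constrained only by the budget identity.
        shares = rng.uniform(0, 1, n_agents)
        return (shares * incomes / p1).sum()

    for p1 in [0.5, 1.0, 2.0, 4.0]:
        print(f"p1 = {p1:3.1f}: aggregate demand = {aggregate_demand(p1):,.0f}")

Doubling the price roughly halves the aggregate quantity demanded even though no agent maximizes anything; the accounting constraint alone produces the downward-sloping curve.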

Neoclassical Bicycle Repair Shop

The general lack of attention to aggregate accounting can perhaps be attributed to economics’ history as a discipline. Although the Anglo-centric fairy-tale version has economics starting with Adam Smith in 1776, the professionalization of economics into a separate academic discipline did not happen until Karl Marx’s 1867 Capital rendered the labor theory of value a hallmark of communist thought. As such, academic economics began with a commitment to investigating questions of distribution through exchange rather than through production.

This paradigm of examining distribution through exchange ultimately led to the proliferation of isoquants and indifference curves, which allowed for maximizing an arbitrary objective function subject to a budget constraint. Whereas the Marxian system has relative prices determined by costs of production plus a profit mark-up, the marginalist system arrives at absolute prices determined by the scarcity of factors and consumer goods.
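
The contrast can be compressed into two lines (my shorthand, flattening both traditions considerably). One standard classical/Marxian formulation takes the profit rate r as the exogenous lever and derives prices from costs:

    p = (1 + r)(Ap + w\ell)

where A is the matrix of produced inputs per unit of output, \ell the vector of labor coefficients, and w the wage. The marginalist system instead derives prices from exchange: each consumer solves

    \max_x \, u(x) \quad \text{subject to} \quad p \cdot x \le m

so that relative prices settle at ratios of marginal utilities and rest, ultimately, on the scarcity of endowments.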

The moral implications of each system are severe, if subtle. For the Marxian system, since factor prices are ultimately determined by the exogenous choice variable of the profit rate, changes in the economy can be effected by forcing capitalists to collect a different profit rate. In the marginalist system, since prices are determined through exchange, changes in the economy can only be effected by allowing prices to change to accommodate exchange of endowments held to be morally sacrosanct.

In order to paper over this rather bleak horizon of marginalist economics, economists have taken to cramming any sort of immaterial value into the utility function, ultimately blaming “irrational” individual behavior both for the fate of their enterprise and for the disruptions in the macroeconomy.

To this end, economists have branched out into other disciplines. Unfortunately, as Wolfers’ examples show, these ventures are not so much syntheses with other disciplines as economic extrapolations imposed on those disciplines’ domains. Though Wolfers cites behavioral economics as “cross-disciplinary fertilization” with psychology, most clinical psychologists would agree that economists have a very strange conception of how humans make decisions.

Wolfers’ examples of economics’ reach into sociology – Gary Becker and George Akerlof – use a similar logic of hedonistic preference ordering to describe kinship and discrimination. Again, most sociologists would likely not agree that the confluence of political, cultural, and legal structures in a given historical moment can be reduced to mere preference, unless one takes for granted the “preference” for these attributes as inexorable “endowments.”

Behavioral economics, and its attendant broadening of the utility function, serves little purpose other than to pass moral judgment on consumer choice without any moral assessment of the institutions in which those choices take place. The game of creating structured environments with an optimal solution is one of testing the strength of ingrained value systems: “irrationality” here is not a question of reasoned behavior but of a failure to be sufficiently unscrupulous. And when behavioral experiments do work out, it is only a reflection of the totalizing nature of the environment the experiment constructs, not a vindication of the calculus-driven reasoning of marginalist economics. Repairing each of the parts of a bicycle is not the same as getting a new bicycle.

So What

The theoretical commitments to marginalism and microfoundations have fundamentally restricted the domain of investigation of macroeconomics. Assuming so many arbitrary untruths about human behavior and technical knowledge has limited the province of “sensible” policy analysis to that which leaves things, for the most part, untouched.

The credibility revolution, combined with Deirdre McCloskey’s “methodological anarchism” (her words, not mine), heralded a seemingly infinite menu of augmentations, correctives, and modifications to the traditional regression model. With so many models to choose from, and no clear way to compare them, economic knowledge becomes largely a process of confirming what’s popular.

However, what’s popular isn’t merely incidental.

What’s popular is determined by what people actually see: what corporate publishers can sell to school districts and universities; what research granting organizations are willing to fund; what articles journal editors are willing to publish; what think tanks say in their policy reports. At every stage, academic economists must appease those with enough money to fund research, and one’s career advancement usually depends on it.

Economists have begun adopting analytical tools to test for scientific consensus and publication bias, though such studies remain relatively sparse. Even so, there is little reason to believe that, as these approaches proliferate, they will escape the same forces of finance.
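
One such tool is Egger’s regression test for publication bias. A minimal sketch on simulated data (the selection rule below is a deliberately crude model of “publish only positive results,” not any actual literature):

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate a literature: the true effect is zero, but noisy small
    # studies get published only when they find a positive result.
    n = 400
    se = rng.uniform(0.05, 0.5, n)                   # study standard errors
    effect = rng.normal(0.0, se)                     # true effect is zero
    published = (effect / se > 1.64) | (se < 0.15)   # crude selection rule
    effect, se = effect[published], se[published]

    # Egger's test: regress (effect / se) on (1 / se); absent publication
    # bias, the intercept should be near zero.
    X = np.column_stack([np.ones(se.size), 1.0 / se])
    beta, *_ = np.linalg.lstsq(X, effect / se, rcond=None)
    print(f"Egger intercept: {beta[0]:.2f} (materially above 0 flags bias)")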

Where New Classical and New Keynesian approaches argue about whether regulations or externalities make markets imperfect, they obfuscate whether their idea of market perfection is even logically tenable. To challenge such a position requires the economist to recognize that their craft is as much one about building moral power as it is about causal inference.
