Body
One of the best pieces of advice Rudi Dornbusch gave me was: Never talk about methodology. Just do it. Yet, I shall disobey and take the plunge.
The background for this blog is a project started by David Vines about DSGEs, how they performed in the crisis, and how they could be improved.[1] Needled by his opinions, I wrote a PIIE Policy Brief. Then, in answer to the comments on the brief, I wrote a PIIE RealTime blog. Then yet a third blog, each time, I hope, a little wiser. I thought I was done, but David organized a one-day conference on the topic, from which I learned a lot and which has led me to write my final (?) piece on the topic.
This piece has a simple theme: We need different types of macro models. One type is not better than the other. They are all needed, and indeed they should all interact. Such remarks would be trivial and superfluous if that proposition were widely accepted, and there were no wars of religion. But it is not, and there are.
Here is my attempt at a typology, distinguishing five types. (I limit myself to general equilibrium models. Much of macro must, however, be about building the individual pieces: constructing partial equilibrium models and examining the corresponding empirical micro and macro evidence, pieces on which the general equilibrium models must then build.) In doing so, I shall, with apologies, repeat some of what was in the previous blogs.
Foundational models. The purpose of these models is to make a deep theoretical point, likely of relevance to nearly any macro model, but without pretending to capture reality closely. I would put here the consumption-loan model of Paul Samuelson, the overlapping generations model of Peter Diamond, the equity premium model of Ed Prescott, the search models of Diamond, Mortensen, and Pissarides, and the models of money by Neil Wallace or Randy Wright (Randy deserves to be here in any case, but the reason I list him is that he was one of the participants at the Vines conference, where I learned from him that what feels like micro-foundations to one economist feels like total ad-hocery to another…).
DSGE models. The purpose of these models is to explore the macro implications of a distortion or set of distortions. To allow for a productive discussion, they must be built around a largely agreed-upon common core, with each model then exploring additional distortions, be they bounded rationality, asymmetric information, different forms of heterogeneity, etc. (At the conference, Ricardo Reis had a nice list of extensions that one would want to see in a DSGE model.)
These were the models David Vines (and many others) was criticizing when he started his project, and, in their current incarnation, they raise two issues:
The first is what the core model should be. The current core, roughly an RBC (real business cycle) structure with one main distortion, nominal rigidities, seems too much at odds with reality to be the best starting point. Both the Euler equation for consumers and the pricing equation for price-setters, in combination with rational expectations, seem to imply far too much forward-lookingness on the part of economic agents. My sense is that the core model must have nominal rigidities, bounded rationality and limited horizons, incomplete markets, and a role for debt. I, and many others, have discussed these issues elsewhere, and I shall not return to them here. (I learned at the conference that some economists, those working with agent-based models, reject this approach altogether. If their view of the world is correct, and network interactions are of the essence, they may be right. But they have not provided an alternative core from which to start.[2])
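To see the forward-lookingness problem concretely, take the two equations just mentioned in their standard log-linearized textbook form (my notation: σ is the inverse of the intertemporal elasticity of substitution, κ the slope of the Phillips curve, x_t the output gap):

c_t = E_t c_{t+1} − (1/σ)(i_t − E_t π_{t+1})

π_t = β E_t π_{t+1} + κ x_t

Iterate each forward and current consumption depends on the entire expected path of future real interest rates, current inflation on the entire expected path of future output gaps. Under rational expectations, agents are thus assumed to look, and to optimize, arbitrarily far into the future. That is the forward-lookingness that seems excessive.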
The second issue is how close these models should be to reality. My view is that they should obviously aim to be close, but not through ad-hoc additions and repairs, such as arbitrary and undocumented higher-order costs introduced only to deliver more realistic lag structures. Fitting reality closely should be left to the next category I examine, i.e., policy models.
Policy models. (Simon Wren-Lewis prefers to call them structural econometric models.) The purpose of these models is to help design policy, to study the dynamic effects of specific shocks, and to allow for the exploration of alternative policies. If China slows down, what will be the effect on Latin America? If the Trump administration embarks on a fiscal expansion, what will be the effects on other countries?
For these models, fitting the data and capturing actual dynamics is clearly essential. But so is having enough theoretical structure that the model can be used to trace the effects of shocks and policies. The twin goals imply that the theoretical structure must by necessity be looser than for DSGEs: Aggregation and heterogeneity lead to more complex aggregate dynamics than a tight theoretical model can hope to capture. Old-fashioned policy models started from theory as motivation and then let the data speak, equation by equation. Some new-fashioned models start from a DSGE structure and then let the data determine the richer dynamics. One of the main models used at the Federal Reserve, the FRB/US model, uses theory to restrict long-run relations and then allows for potentially high-order costs of adjustment to fit the dynamics of the data. I am skeptical that this is the best approach, as I do not see what is gained, theoretically or empirically, by constraining dynamics in this way.
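To fix ideas, here is a stylized sketch of the FRB/US-type approach (my simplification, not the actual FRB/US specification). Theory pins down a long-run target y*_t, say consumption as a function of wealth and expected income; the estimated equation then lets the data choose the adjustment dynamics:

Δy_t = α(y*_{t−1} − y_{t−1}) + λ_1 Δy_{t−1} + … + λ_k Δy_{t−k} + ε_t

The error-correction term imposes the theoretical long-run relation, while the lag coefficients λ_1, …, λ_k, the analogue of the high-order adjustment costs, are left to fit the data. My skepticism is about what the adjustment-cost interpretation of the λ's adds over simply estimating them.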
In any case, for this class of models, the rules of the game must be different from those for DSGEs. Does the model fit well, for example, in the sense of being consistent with the dynamics of a VAR characterization? Does it capture well the effects of past policies? Does it allow one to think about alternative policies?
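One standard way to make the first of these questions operational: estimate an unrestricted VAR on the model's main variables,

Y_t = A_1 Y_{t−1} + … + A_p Y_{t−p} + u_t

and ask whether the impulse responses implied by the policy model lie within the confidence bands around those implied by the VAR. This is a criterion of fit, not of theoretical purity, exactly as it should be for this class of models.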
Toy models. Here, I have in mind models such as the many variations on the IS-LM model, the Mundell-Fleming model, the RBC model, and the New Keynesian model. As my list indicates, some may be only loosely based on theory, others more explicitly so. But they have the same purpose: to allow for a quick first pass at some question, or to present the essence of the answer from a more complicated model or class of models. For the researcher, they may come before writing a more elaborate model, or after, once the elaborate model has been worked out and its entrails examined.
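To take the most familiar example: in its simplest textbook form, the IS-LM model is just two equations, Y = C(Y − T) + I(i) + G and M/P = L(i, Y), in two unknowns, output Y and the interest rate i. That is enough for a quick first pass at, say, the effects of a fiscal expansion.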
How close to formal theory these models remain is just not a relevant criterion here. In the right hands, and here I think of master craftsmen such as Robert Mundell or Rudi Dornbusch, they can be illuminating. There is a reason why they dominate undergraduate macroeconomics textbooks: They work as pedagogical devices. They are as much art as science, and not all economists are gifted artists. But art is of much value. (The nature of the art has changed somewhat. In the old days, the fact that paper is two-dimensional forced one to write models with two equations, or sometimes, with a lot of ingenuity and shortcuts, models with three equations. The ease of use of MATLAB and Dynare has made it easier to characterize and convey the behavior of slightly larger models.)
Forecasting models. The purpose of these models is straightforward: Give the best forecasts. And this is the only criterion by which to judge them. If theory is useful in improving the forecasts, then theory should be used. If it is not, it should be ignored. My reading of the evidence is that the jury is still out on how much theory helps. The issues are then statistical, from how to deal with over-parameterization to how to deal with the instability of the underlying relations.
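To give a sense of the over-parameterization problem: an unrestricted VAR with n variables and p lags has n²p slope coefficients to estimate, 400 of them already with n = 10 and p = 4. One standard device (an illustration, not an endorsement) is Bayesian shrinkage of the Minnesota type: center each equation on a random walk, and let the prior standard deviation of the coefficients on lag l shrink with l, for example in proportion to λ/l, with the overall tightness λ itself chosen for forecast performance.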
In sum: We need different models for different tasks. Attempts to make some of these models do more than they were designed for seem overambitious. I am not optimistic that DSGEs will be good policy models unless they become much looser about constraints from theory. I am willing to see them used for forecasting, but I am again skeptical that they will win that game. This being said, the different classes of models have a lot to learn from each other and would benefit from more interaction. Old-fashioned policy models would benefit from the work on heterogeneity and liquidity constraints embodied in some DSGEs. And, to repeat a point made at the beginning, all models should be built on solid partial equilibrium foundations and empirical evidence.
Notes
[1] The outcome of this project will be a number of articles, to be published in a Special Issue of the Oxford Review of Economic Policy with the title "Rebuilding the Core of Macroeconomic Theory."
[2] A semantic issue: While I believe that this class of models must indeed be dynamic, stochastic, and general equilibrium, the acronym DSGE is widely seen as referring to a specific class of models, namely RBC-based models with distortions. Agent-based modelers would argue that this is not the only approach within that class.