The Need for Different Classes of Macroeconomic Models

This is my third piece on dynamic stochastic general equilibrium models (DSGEs). The first, a PIIE Policy Brief, was triggered by a project, led by David Vines, to assess how DSGEs had performed during the financial crisis (namely, badly) and how they could be improved.1 That brief went nearly viral (by the standards of blogs on DSGEs ☺). The many comments I received led me to write a second piece, a PIIE RealTime blog, which in turn led to a further round of reactions, prompting me to write this blog (which I hope and fully expect will be the last on this topic). In this blog I want to make one main point:

Different classes of macro models are needed for different tasks.

Let me focus on two main classes.2

Theory models, aimed at clarifying theoretical issues within a general equilibrium setting. Models in this class should build on a core analytical frame and have a tight theoretical structure. They should be used to think, for example, about the effects of higher required capital ratios for banks, or the effects of public debt management, or the effects of particular forms of unconventional monetary policy. The core frame should be one that is widely accepted as a starting point and that can accommodate additional distortions. In short, it should facilitate the debate among macro theorists.

Policy models, aimed at analyzing actual macroeconomic policy issues. Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals. They should be used to think, for example, about the quantitative effects of a slowdown in China on the United States, or the effects of a US fiscal expansion on emerging markets.

It would be nice if a model did both, namely have a tight, elegant, theoretical structure and fit the data well. But this is a pipe dream. Perhaps one of the main lessons of empirical work (at least in macro, and in my experience) is how messy the evidence typically is, how difficult aggregate dynamics are to rationalize, and how unstable many relations are over time. This may not be too surprising. We know, for example, that aggregation can make aggregate relations bear little resemblance to underlying individual behavior.

So, models that aim at achieving both tasks are doomed to fail, in both dimensions.3

Take theory models. DSGE modelers, confronted with complex dynamics and the desire to fit the data, have extended the original structure to add, for example, external habit persistence (not just regular, old habit persistence), costs of changing investment (not just costs of changing capital), and indexing of prices (which we do not observe in reality). These changes are entirely ad hoc, do not correspond to any micro evidence, and have made the theoretical structure of the models heavier and more opaque.4
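To give a sense of what these add-ons look like (a stylized sketch in standard textbook notation, not the exact specification of any particular DSGE): habit persistence replaces consumption in utility with consumption relative to a habit stock, and the "external" variant makes that stock the lagged aggregate, which the household takes as given; investment adjustment costs penalize changes in investment rather than the investment-to-capital ratio.

u_t = \frac{(C_t - h\,X_{t-1})^{1-\sigma}}{1-\sigma}, \qquad X_{t-1} = C_{t-1} \ \text{(internal: own past consumption)} \quad \text{or} \quad X_{t-1} = \bar{C}_{t-1} \ \text{(external: lagged aggregate consumption)}

K_{t+1} = (1-\delta)\,K_t + \Big[\,1 - S\!\big(\tfrac{I_t}{I_{t-1}}\big)\Big]\,I_t, \qquad S(1) = S'(1) = 0, \ \ S'' > 0

Each of these additions buys extra persistence or hump-shaped responses, which is why it was introduced; the cost is the heavier, more opaque structure just described.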

Take policy models. Policy modelers, looking to tighten the theoretical structure of their models, have, in some cases, attempted to derive the observed lag structures from some form of optimization. For example, in the main model used by the Federal Reserve, the FRB/US model, the dynamic equations are constrained to be solutions to optimization problems under high-order adjustment cost structures. This strikes me as wrongheaded. Actual dynamics probably reflect many factors other than costs of adjustment. And the constraints that are imposed (for example, on the way the past and the expected future enter the equations) have little justification, theoretical or empirical.
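For concreteness, here is a stylized version of such an equation (my sketch of the general polynomial-adjustment-cost form, not the exact FRB/US specification): the change in a variable y_t depends on the lagged gap from its target y*_t, on its own lagged changes, and on a discounted sum of expected future changes in the target,

\Delta y_t = a_0\,(y^*_{t-1} - y_{t-1}) + \sum_{i=1}^{m} a_i\,\Delta y_{t-i} + \sum_{j=0}^{\infty} b_j\,E_t\,\Delta y^*_{t+j} + \varepsilon_t,

with the forward weights b_j tied, by the underlying adjustment cost problem, to the a_i and the discount factor. Those cross-equation restrictions on how the past and the expected future enter are the constraints I find hard to justify.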

So what should be done? My suggestion is that the two classes should go their separate ways.

DSGE modelers should accept the fact that theoretical models cannot, and thus should not, fit reality closely. The models should capture what we believe are the macro-essential characteristics of the behavior of firms and people, and not try to capture all relevant dynamics. Only then can they serve their purpose, remain simple enough, and provide a platform for theoretical discussions.5

Policy modelers should accept the fact that equations that truly fit the data can have only a loose theoretical justification. In that respect, the early macroeconomic models had it right: the permanent income theory, the life cycle theory, and the Q theory provided guidance for the specification of consumption and investment behavior, but the data then determined the final specification.6
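A stylized example of what I have in mind (my illustration, not the equation of any particular model): theory pins down the long-run target, say consumption proportional to total wealth, human and nonhuman, while the short-run dynamics, the lag structure and the speed of adjustment, are left to the data,

C^*_t = \kappa\,(A_t + H_t), \qquad \Delta c_t = \lambda\,(c^*_{t-1} - c_{t-1}) + \sum_i \gamma_i\,\Delta c_{t-i} + \sum_j \delta_j\,\Delta x_{t-j} + u_t,

where A_t is nonhuman wealth, H_t is expected discounted labor income, lowercase letters denote logs, x_t stands for other short-run drivers, and the \lambda, \gamma_i, and \delta_j coefficients come from estimation rather than from theory.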

Both classes should clearly interact and benefit from each other. To use an expression suggested by Ricardo Reis, there should be scientific cointegration. But the goal of full integration has, I believe, proven counterproductive. No model can be all things to all people.

Notes

1. The outcome of this project will be a number of articles, to be published in a Special Issue of the Oxford Review of Economic Policy with the title "Rebuilding the Core of Macroeconomic Theory."

2. As discussed in the previous blog, the others are partial equilibrium models, small toy models for pedagogical and communication purposes, reduced form models for forecasting, etc. They all have an important role to play.

3. The French have an apt expression to describe such contraptions: “The marriage of a carp and a rabbit.”

4. I have a number of other problems with existing DSGEs, but these were the topics of my previous blogs. 

5. Ironically, I find myself fairly close to my interpretation of Ed Prescott’s position on this issue, and his dislike of econometrics for those purposes.

6. As Ray Fair has shown, there is no incompatibility between doing this and allowing for forward-looking, rational expectations.
