Distortions in Macroeconomics

Submission to the NBER Macroeconomics Annual, sponsored by the National Bureau of Economic Research

June 18, 2017

After-dinner talks are the right place to test tentative ideas, hoping for the indulgence of the audience. Mine will be in that spirit, and reflect my thoughts on what I see as a central macroeconomic question: What are the distortions that are central to understanding short-run macroeconomic developments?

I shall argue that, over the past 30 years, macroeconomics has, to an unhealthy extent, focused on a one-distortion (nominal rigidities), one-instrument (the policy rate) view of the macroeconomy. As useful as the body of research that came out of this approach was, it was too reductive, and it proved inadequate when the Global Financial Crisis came. We need, even in our simplest models, to take into account more distortions. Having stated the general argument, I shall turn to a specific example and show how this richer approach modifies the way we should think about policy responses to the low neutral interest rates we observe in advanced economies today.

Let me develop this theme in more detail.

Back in my student days, i.e., the mid-1970s, much of macroeconomic research was focused on building larger and larger macroeconometric models, based on the integration of many partial equilibrium parts. Some researchers worked on explaining consumption, others on explaining investment, or asset demands, or price and wage setting. The empirical work was motivated by theoretical models, but these models were taken as guides rather than as tight constraints on the data. The estimated pieces were then put together in larger models. The behavior captured in the estimated equations reflected in some ways both optimization and distortions, but the mapping was left, it was felt by necessity, implicit and somewhat vague. (I do not remember hearing the word “distortions” used in macro until the 1980s.)

These large models were major achievements. But, for various reasons, researchers became disenchanted with them. Part of it was obscurity: the parts were reasonably clear, but the sum of the parts often had strange properties. Part of it was methodology: identification of many of the equations was doubtful. Part of it was poor performance: the models did not do well during the oil crises of the 1970s. The result of this disappointment was a desire to go back to basics.

For my generation of students, three papers played a central role. One was the paper by Robert Lucas (1973) on imperfect information. The other two were the papers by Stanley Fischer (1977) and by John Taylor (1980) on nominal rigidities. While the approaches were different, the methodology was similar: the focus was on the effects of one distortion: imperfect information leading to incomplete nominal adjustment in the case of Lucas, and explicit nominal rigidities, without staggering of decisions in Fischer, and with staggering in Taylor. All other complications were cast aside, to focus on the issue at hand: the role of nominal rigidities and the implied non-neutrality of money.

Inspired by these models, further work then clarified the role of monopolistic competition, the role of menu costs, and the role of different staggering structures, showing how each of them shaped the dynamic effects of nominal shocks. The natural next step was the reintegration of these nominal rigidities into a richer, micro-founded, general equilibrium model. The real business cycle model, developed by Kydland and Prescott (1982), provided the simplest and most convenient environment. Thus was born the New Keynesian (NK) model, a slightly odd marriage of the most neoclassical model and an ad hoc distortion. But it was a marriage that has held together to this day.

Olivier Blanchard, Senior Research Staff