# Isn’t it amazing how well the Consumption Euler Equation works?

While preparing graphs for my Principles of Macroeconomics class, I made this one:

The blue line is the growth rate of nominal consumption spending in the US; the red line is the nominal interest rate on a risk-free asset (a 10-year US government bond). See the way the red line tracks the blue line? That’s a beautiful confirmation of the Consumption Euler Equation, which is the cornerstone of all modern macro models. (And no, I didn’t tweak this graph by restricting the time period, choosing different axes for the two lines, or transforming the data somehow. This is a plot of the raw data without any editing. No funny stuff.)
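The logic being confirmed here can be made explicit. In its standard log-linearized form (a sketch assuming CRRA utility, where σ is the inverse intertemporal elasticity of substitution and ρ the time discount rate, neither of which appears in the graph itself), the Euler equation reads:

```latex
i_t \;\approx\; \rho \;+\; \sigma\, E_t\!\left[\Delta \ln C_{t+1}\right] \;+\; E_t\!\left[\pi_{t+1}\right]
```

With log utility (σ = 1), the right-hand side is just expected nominal consumption growth plus a constant, so the nominal interest rate should track nominal consumption growth up to a level shift, which is exactly what the two lines show.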

PS: I’m actually not going to teach the Euler Equation in my Principles class. Nobody seems to. Mankiw’s textbook doesn’t. But I’m increasingly asking myself: why not?

# Towards a measure of welfare-relevant national output

Robert Barro says GDP overstates national income because it counts investment twice.

Here is Scott Sumner explaining Barro’s point with an example:

Thus suppose Tesla builds a battery factory that costs \$1 billion, which lasts for 20 years. They hire workers and pay another \$2 billion in wages over 20 years. The batteries sell for a total of \$3.3 billion, a profit margin of 10%. In this example, \$4.3 billion is added to GDP over the life of the factory—\$1 billion in investment and another \$3.3 billion in consumer goods (batteries). But there is actually only \$3.3 billion worth of actual “goods” being produced; the \$1 billion factory investment is an input.

As Scott Sumner points out, GDP isn’t meant to be a measure of national welfare, but of national output. This should always be kept in mind and should be pointed out whenever someone is using GDP per capita as a measure of welfare. But it’s clear that GDP, understood as national output, is really useful for many policy discussions.

That said, I was thinking about how to correct GDP to better measure that part of national output which is directly relevant to people’s wellbeing. And here’s what I would do: I would count all spending on consumption goods (private and public) as well as residential construction spending, which is presently counted as “investment”. Following Barro’s critique, I would not count spending on capital goods such as factories, machines, tools, and intellectual property, which are only indirectly useful to consumers insofar as they help produce consumer goods in the future.

As for government consumption, I would suggest applying a “waste correction” to take into account the fact that some of that consumption just isn’t useful to consumers. Spending billions of euros on a tunnel or an airport or a bridge which nobody has used yet, or on a weapons system which (hopefully) will never be used, is to a large degree wasted money, although views will differ on exactly how much of it is really wasted. At any rate, I think GDP should try to account for government waste.

So to sum up, I’d propose the following measure:

Welfare-relevant GDP
= Private consumption
+ Government consumption x (1 – waste ratio)
+ Investment in residential construction

Here’s what this would look like for Austria in 2018:

| | million euros, 2018 |
| --- | --- |
| Private consumption | 199,459 |
| Government consumption | 74,295 |
| of which waste | 14,859 |
| Residential investment | 17,232 |
| Welfare-relevant GDP | 276,126 |
| Conventional GDP | 386,063 |
| Ratio: welfare-relevant / conventional GDP | 71.5% |

In other words, conventional GDP overstates the supply of goods that are directly relevant for the welfare of households by almost 30%. I would like the welfare-relevant GDP measure to be used when comparing living standards across countries, or within countries across time. And I would like growth theory to focus on the growth of this measure.
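The arithmetic behind the table is a direct application of the formula above (a minimal sketch; the 20% waste ratio is my reading of the 14,859 / 74,295 “of which waste” row):

```python
# Welfare-relevant GDP for Austria, 2018 (million euros), as defined above.
private_consumption = 199_459
government_consumption = 74_295
waste_ratio = 0.20              # implied by the "of which waste" row
residential_investment = 17_232
conventional_gdp = 386_063

welfare_gdp = (private_consumption
               + government_consumption * (1 - waste_ratio)
               + residential_investment)

print(round(welfare_gdp))                              # matches the table up to rounding
print(round(100 * welfare_gdp / conventional_gdp, 1))  # the 71.5% ratio
```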

(PS: What about exports and imports? Exports aren’t welfare-relevant for the home country, because those are goods consumed by foreigners. Imports are, of course, already included in the private and public spending measures. So there’s no need to add exports and subtract imports as is done in conventional GDP.)

# How much room is there for a tax cut in Austria?

At some point during the current legislative period there will be a major tax reform. That was one of the central campaign promises of the current governing parties. The exact design is still up for debate, but the overall size has already been announced: 6.5 billion euros.

One may wonder whether a tax cut of this size is even feasible, given that government debt still exceeds the Maastricht limit of 60% of GDP and the budget is only barely in surplus. In the legendary words of our Federal President: Wie geht des z’sam? (“How does that fit together?”)

There is room for a tax cut if current government revenues are higher than necessary to stabilize government debt in the long run. The debt ratio currently stands at 73.8% and total government revenues at 48.6% of GDP. Government spending is 48.5% of GDP, of which 1.7% is interest on the outstanding debt. Subtracting the latter from total spending yields so-called primary government spending of 46.8%. This implies a primary budget surplus of 1.8%. (This surplus is much higher than the 0.1% you read in the newspapers, precisely because our figure excludes interest payments.)

The change in the debt ratio (B) has two components: the current primary deficit (i.e. the difference between primary spending G and revenues T, each measured relative to GDP) and the interest burden on the existing debt. The second component depends in turn on the difference between the interest rate on government bonds (r) and the growth rate of GDP (g). In symbols:

dB = (r-g)B + G - T.

The interest rate on long-term government bonds currently stands at a historically low 0.7%, while growth is 2.4%, i.e. r-g = -1.7%. Under these conditions, the debt ratio would fall “by itself”, as long as the primary deficit does not exceed 1.3% of GDP. This creates room for tax cuts.
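To see this “falling by itself” concretely, one can iterate the accumulation equation dB = (r-g)B + G - T forward with the current values (a minimal sketch; the ten-year horizon and the zero primary deficit are my illustrative choices):

```python
# Debt-ratio dynamics dB = (r - g)*B + (G - T), everything in % of GDP.
r_minus_g = -0.017        # 0.7% bond yield minus 2.4% growth
B = 73.8                  # current debt ratio
primary_deficit = 0.0     # G - T

for year in range(10):
    B += r_minus_g * B + primary_deficit
print(round(B, 1))        # the ratio drifts down on its own

# Break-even primary deficit, at which the ratio stops falling today:
print(round(-r_minus_g * 73.8, 2))   # roughly the 1.3% of GDP quoted above
```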

But how large is this room exactly? The long-run required level of government revenues T* follows as:

T* = (r-g)B* + G*,

where the asterisk denotes the long-run value of the respective variable. The room for a tax cut is the difference between current revenues T and the long-run required value T*.

So to compute T*, we need to know three things:

1. the long-run target for the share of primary government spending in GDP,
2. the long-run target for the debt ratio,
3. the long-run difference between the interest rate and GDP growth.

In the scenario shown in the table below, I make the following assumptions:

1. Primary government spending stays at its current level of roughly 47% of GDP.
2. The debt ratio is stabilized at 60%, in line with the Maastricht criteria.
3. The interest-growth differential equals its historical average of roughly 0.2%.
| | Currently | Scenario 1 | Scenario 2 |
| --- | --- | --- | --- |
| Government debt in % of GDP (B) | 73.8 | 60.0 | 60.0 |
| Interest-growth differential in % (r-g) | -1.7 | 0.2 | -0.7 |
| Primary government spending in % of GDP (G) | 46.8 | 47.0 | 47.0 |
| Government revenues in % of GDP (T) | 48.6 | 47.1 | 46.6 |
| Implied fiscal space in % of GDP | | 1.5 | 2.0 |

In this scenario, the fiscal space is 1.5% of GDP (the difference between the current 48.6% and the long-run requirement of 47.1%), i.e. roughly 5.7 billion euros. In an alternative scenario, I assume an interest-growth differential of -0.7%, roughly halfway between the historical average and the current value. In that case the space rises to 2% of GDP, or 7.8 billion euros. The 6.5 billion floated by the finance minister thus lies in between. Of course, one can also construct scenarios with no fiscal space at all, for example if interest rates rise much more sharply and growth collapses. But I currently see no signs of that.
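Both scenarios in the table follow directly from T* = (r-g)B* + G* (a minimal sketch; converting percentages into euro amounts, as in the text, additionally requires Austria’s nominal GDP, which is not computed here):

```python
# Long-run required revenues: T* = (r - g)*B* + G*, everything in % of GDP.
def required_revenue(r_minus_g, B_star, G_star):
    return r_minus_g * B_star + G_star

T_current = 48.6
for r_minus_g in (0.002, -0.007):          # Scenario 1, Scenario 2
    T_star = required_revenue(r_minus_g, B_star=60.0, G_star=47.0)
    print(round(T_star, 1), round(T_current - T_star, 1))
# Scenario 1: required revenues 47.1, fiscal space 1.5% of GDP
# Scenario 2: required revenues 46.6, fiscal space 2.0% of GDP
```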

Conclusion: A tax cut in the range of 6.5 billion euros is entirely compatible with a stable long-run debt ratio, and without requiring any cuts on the spending side. However, the crux of the whole matter is the interest-growth differential. At the moment it is strongly negative, which is a historical exception. The big question in the coming years will be whether, and to what extent, interest rates move back toward their historical average.

# Why our models are models of models and what that means for the current debate about the future of macroeconomics

In the latest issue of the “Oxford Review of Economic Policy”, Simon Wren-Lewis has written an interesting contribution on the shortcomings of contemporary macroeconomic models. In his article, he argues that the “microfoundations hegemony” is among the core problems holding back progress. I want to add an argument to this debate in support of the incipient collapse of this dogma.

Historically, the idea of basing macroeconomic models on explicit microfoundations originated in the 1970s, leading to the demise of old-style Keynesian models, which relied heavily on ad-hoc restrictions such as a constant aggregate savings rate. With the famous Lucas critique declaring that ad-hoc restrictions cannot be considered invariant to changes in economic policy, a methodological standard came to dominate the profession which demands explicit microfoundations as a precondition for doing proper macroeconomic modelling. The following points are central to this doctrine:

I. Explicit microfoundations are needed to make models “robust” to the Lucas critique.

II. Explicit microfoundations provide the basis for “checking” the internal consistency of the underlying thought.

III. As a precondition for being certain about I. and II., the microfoundations have to be expressed in the precise language of mathematics.

Although this all seems quite convincing at first sight, the whole idea rests on a particularly troublesome misconception of what (macro)economic models usually represent. In the standard view, we see them as simplified representations of reality, as approximations to a complex world. “Think of it like a map! If you design a map of the Austrian highway system, you leave out irrelevant aspects like the trees lining the highway.” – Right? Ok… so our models are approximations! Approximations of what? Of the real world! Which looks how? Well, of course we cannot know everything in perfect detail, reality is rather complex, but… but you know how to design proper approximations to it? How do you make proper approximations to something you do not really know because it is too complex?

In my view, the majority of (macro)economic models are indeed best seen as approximations, but as approximations of what somebody thinks about the real world rather than of the real world itself. They are formal models of the fuzzy “models” that we have in our brain – models of models, approximations of simplifications. To see this, consider the example below, which you might easily find in a standard macro paper.

“For sake of simplicity, suppose that an infinity of identical firms produce according to Y=f(K,L) with Y giving output, K denoting the capital stock and L the amount of labour.” How do we proceed when we read that?

a. Translate the equation Y=f(K,L) into words: “Ok, so…production uses capital and labour as inputs.”

b. Guess what the author might want to say about the real world:

1. “So there is an infinity of firms in the model. Does he/she mean that there is an infinity of firms in the real world? – I guess not. So how many firms does he/she mean – 1000, 1,000,000?”
2. “Does he/she mean that all firms are the same in the real world? – I hope not!”
3. Ah… “for sake of simplicity” – so the assumption was made although he/she actually means something else. If so… what?! Hm…
4. “Maybe he/she means that analyzing market power of firms is not necessary for the respective purpose of the model?” – Sounds better. Or maybe, he/she means that market power is generally negligible…– whatever. I just stick to the latter interpretation.

Note that this is a pretty simplified example. In macro models you typically have various actors and feedback effects between consumers, producers, the central bank etc. If you let 10 people carry out the steps above for such models, you will usually get 10 different interpretations. To overcome this, you may introduce some form of heterogeneity in the model, try to get a slightly more realistic expression of competition, and so on. You will nevertheless end up with mathematical expressions that do not correspond to what you actually want to say about people’s behavior and their interactions. In other fields, the difference between the formal model and the model you have in mind may be small; in macro, the gap is usually rather pronounced.

What does that imply for the future of macroeconomics? I assume here that one is willing to follow some form of McCloskey’s view of economists as “persuaders”, i.e. we are interested in changing the fuzzy “models” in our brain or in other people’s brains, while the formal ones are only tools for achieving this. It follows:

i) Explicit microfoundations may help to address the Lucas critique, but they cannot make a model immune to it, since other people may simply not interpret the parameters of the formal microfoundations as structural. More importantly, a model that is not explicitly microfounded may reasonably be judged robust to it by adding an informal story. Both routes end in an informal judgement. Explicit microfoundations are therefore neither necessary nor sufficient to address the Lucas critique, and by using them we do not escape the final step of informal, fuzzy, subjective judgement.

ii) Since the formal model on paper and the fuzzy model in our brain are distinct, the internal consistency of the formal structure is neither necessary nor sufficient for the consistency of the underlying thought.

iii) Mathematical models are not an intrinsically precise way of communicating economic ideas. Ordinary speech may promote clarity, since it describes the fuzzy models in our brains directly rather than approximating them with the often rather rough formal elements available.

With all this, I neither want to say that we should completely depart from explicit microfoundations nor that we should abandon mathematical representations. I think both are powerful tools for bringing macroeconomics forward. There is just no reason to apply them dogmatically without thinking about whether doing so makes sense for the purpose at hand, and it is certainly unjustified to impose this standard on others when judging their contributions, at least if one’s arguments in favor of this standard are based on I.-III. Finally, given that the gap between the formal and the fuzzy model is often sizeable, we cannot keep simply throwing models at each other. Models can be great tools for thinking, but in the end somebody has to make the actual argument about, say, the sources of the recent financial crisis. This means using and describing the relevant parts of one’s fuzzy model, which will ideally have been sharpened by the formal ones. And doing so requires fuzzy, ordinary language, not math!

# “The Rate of Return on Everything“

This is the title of a new paper by Oscar Jorda, Katharina Knoll, Dmitry Kuvshinov, and Moritz Schularick (original paper, voxeu article). The paper is the result of a research project to calculate the rates of return on four major asset categories – equities, bonds, bills, and real estate – in 16 major developed economies, going back as far in time as reasonable. (Quibble: Is that really everything? What about gold? Currencies? Commodities? Paintings? Vintage cars?)

The paper does nothing but compute long-run averages and standard deviations and draw graphs. No regressions, no econometric witchcraft, no funny stuff. Yet its findings are fascinating.

Some of the results confirm what “everyone already knew, kind of”:

1. Risky investments like equities and real estate yield 7% per year in real terms.
2. The risk premium (equities/housing vis-a-vis short-term bond rates) is usually between 4 and 5%.
3. There is no clear long-run trend (either up or down) in rates of return. (Take this, Karl Marx!)

Some of the results are interesting, but not particularly puzzling:

1. The return on total wealth (average of the rates of return on all assets weighted by their share in the economy’s aggregate portfolio) exceeds the rate of growth of GDP. This confirms Piketty’s claim that r > g. In terms of the Solow model it means we are living in a dynamically efficient regime: we cannot make both present and future generations better off by saving less. Perhaps the most interesting aspect of this finding is its robustness: it holds for every sub-period and for every country. It really seems to be a “deep fact” about modern economies.
2. The return on risk-free assets is sometimes higher, sometimes lower than the growth rate of GDP. For instance, before the two World Wars, the differential between the risk-free rate and growth was mostly positive, so that governments had to run primary surpluses to keep debt stable. Post-1950, the differential was mostly negative.
3. Negative returns on safe assets are not unusual. Safe rates were negative during the two World Wars as well as during the crisis of the 1970s. In recent times, safe rates went negative again in the aftermath of the global financial crisis. These findings don’t disprove the “secular stagnation” hypothesis of Summers et al., but they do put it in historical perspective. It seems that rates were unusually high during the 1980s and the recent downward trend could just be reversion to the mean.

But some results are really puzzling – even shocking from the point of view of standard finance theory:

1. The return on risk-free assets is more volatile than the return on risky ones. I haven’t yet digested this fact fully. Does this mean that “risk-free asset” is a total misnomer? No, because “risk-free” refers to the unconditional nature of an asset’s payoff, not the variability of its return. A bond is “risk-free” because it promises a fixed cash flow irrespective of the state of the economy. Stocks are called risky, not because their returns are volatile, but because the dividends they pay are conditional on the performance of the company. So does this mean that people’s time discount rate varies a lot? Why? It can’t be consumption growth variability, because consumption is quite smooth. What’s going on?
2. Housing offers the same yield as equities, but is much less volatile. Jorda et al. refer to this as the housing puzzle. I’m not sure how puzzled I should be by this. I surely found the high average yield of real estate assets surprising. However, from what I know about house price indices and the myriad measurement issues surrounding them, I feel one should be very cautious about the housing returns. I would definitely like someone who knows more about this to look carefully at how they calculated the returns (paging Dr. Waltl!). One potential solution to the puzzle would be differences in liquidity. Housing is super illiquid, equities are quite liquid. Couldn’t the high return on housing just be an illiquidity premium?

There is much, much more in the paper, but those were the points that I found most striking. I’m sure this will be one of the most important papers of the past year and will be a crucial source for researchers in finance, growth, and business cycle theory. Plenty of food for thought.

# Modern macro was invented by a Soviet economist

Here’s the story.

In 1927, a Russian economist by the name of Eugen Slutsky wrote a paper entitled “The Summation of Random Causes as the Source of Cyclic Processes”. At the time, Slutsky was working for the Institute of Conjuncture in Moscow. That institute was headed by a man called Nikolai Kondratiev.

This was in the early days of the Soviet Union, before Stalin managed to turn it into a totalitarian hellhole, a time when the Communist leadership was relatively tolerant towards scientists and even occasionally listened to their advice. The institute’s job was basically to collect and analyze statistics on the Russian economy in order to help the Party with their central planning. But Kondratiev seemed to take the view that it would be best to allow the market to work, at least in the agricultural sector, and use the proceeds from agricultural exports to pay for industrialization. Lenin apparently took the advice and in 1922 launched the so-called New Economic Policy which allowed private property and markets for land and agricultural goods and re-privatized some industries which had been nationalized after the October Revolution. This policy turned out to be rather successful – at least it ended the mass starvation which War Communism had caused during the years of the Russian civil war.

But then Lenin died and Stalin took over and decided that the time had come to get serious about socialism again and finally abolish private property and markets for good. Dissenting voices like Kondratiev’s clearly couldn’t be tolerated in this great enterprise, so in 1928 Kondratiev was sacked and the institute was closed down. Some time later, Kondratiev was arrested, found guilty of being a “kulak professor” and sent off to a labor camp. Even there he continued to do research, until Stalin had him killed by firing squad during the Great Purge of 1938.

But I’m digressing, so back to Slutsky. His 1927 paper was written in the wake of Kondratiev’s 1925 book “The Major Economic Cycles”. That book claimed that capitalist economies exhibit regular boom-bust waves of about 50 years’ duration, known today as Kondratiev Waves. Other “conjuncture” researchers had claimed the existence of shorter waves.

Slutsky’s first observation was that when you really look at time series of aggregate economic output, you don’t see regular waves, but a lot of irregular fluctuations. So trying to find deterministic, sinusoidal waves in economic time series is probably not a very fruitful exercise.

Slutsky’s second observation was that when you draw a long series of independently and identically distributed random variables (modern terminology, not his) and then take some moving average of them… you get a time series that looks an awful lot like real-world business cycles!

He showed that in two ways. First, he performed simulations. Remember this is 1927 – so how did he simulate his random numbers? Well, the People’s Commissariat of Finance ran a lottery. So Slutsky took the last digits of the numbers drawn in the lottery (this is the basic series shown in figure 1). He then computed a bunch of different moving average schemes one of which is shown in figure 2. See the boom-bust cycles in that picture? Pretty cool, huh?
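Slutsky’s experiment is easy to replicate today without the Commissariat’s lottery (a minimal sketch: iid shocks, then a 10-term moving average; both the Gaussian draws and the window length are my choices, not his):

```python
import random

random.seed(1927)

# Step 1: a long series of iid "random causes" (Slutsky used lottery digits).
shocks = [random.gauss(0.0, 1.0) for _ in range(500)]

# Step 2: a moving average -- each period's "output" is an average of the
# last n shocks. This alone produces boom-bust-looking waves.
n = 10
cycle = [sum(shocks[t - n:t]) / n for t in range(n, len(shocks))]

# The smoothed series swings persistently above and below zero, even
# though the underlying shocks are serially independent: it crosses
# zero far less often than the raw shock series does.
sign_changes = sum(1 for a, b in zip(cycle, cycle[1:]) if a * b < 0)
print(sign_changes, len(cycle))
```

Plotting `cycle` gives exactly the kind of irregular waves Slutsky showed in his figure 2.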

But Slutsky didn’t just show cool graphs. He also had a beautiful argument for why these moving averages looked like recurrent waves:

We shall first observe a series of independent values of a random variable. If, for sake of simplicity, we assume that the distribution of probabilities does not change, then, for the entire series, there will exist a certain horizontal level such that the probabilities of obtaining a value either above or below it would be equal. The probability that a value which has just passed from the positive deviation region to the negative, will remain below at the subsequent trial is 1/2; the probability that it will remain below two times in succession is 1/4; three times 1/8; and so on. Thus the probability that the values will remain for a long time above the level or below the level is quite negligible. It is, therefore, practically certain that, for a somewhat long series, the values will pass many times from the positive deviations to the negative and vice versa.

(For the mathematically minded, there’s also a formal proof just in case you’re wondering.)
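Slutsky’s halving argument is also easy to check numerically (a sketch: after a value has just crossed below the median, the chance of staying below for k further draws should be about 1/2^k):

```python
import random

random.seed(42)
draws = [random.random() - 0.5 for _ in range(200_000)]  # symmetric around 0

below = [x < 0 for x in draws]
est = {}
for k in (1, 2, 3):
    stays = trials = 0
    for t in range(1, len(draws) - k):
        if below[t] and not below[t - 1]:        # value just crossed below zero
            trials += 1
            stays += all(below[t + 1:t + 1 + k]) # stayed below k more times?
    est[k] = stays / trials
    print(k, round(est[k], 3))   # close to (1/2)**k: 0.5, 0.25, 0.125
```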

Since it was written in Russian, the paper went unnoticed by economists in the West until it came to the attention of Henry Schultz, professor at the University of Chicago and one of the founders of the Econometric Society. He had the paper translated and published in Econometrica in 1937.

And so Slutsky’s “random causes” provided the first stepping stone for the modern business cycle theories which explain how random shocks produce, via the intertemporal choices of households, firms and government agencies, the cyclical patterns we see in aggregate time series.

P.S.: All this time you have probably asked yourself: Slutsky, Slutsky,… that name rings a bell. Oh right, the Slutsky Equation! Yep. Same guy.

# A New Keynesian toy model

I’ve been keeping a collection of “toy models” on my computer. I do this for two reasons. First, building them is a lot of fun and useful as a kind of intellectual work-out to develop the “model-building” regions of my brain. Second, I think they help clarify my own thinking about economic issues.

I’d like to share one of my favorite toy models with you. I learnt it from Cedric Tille when I was at the IfW Kiel. The purpose of this model is to show the basic intuition behind a strand of literature called “New Keynesian” macroeconomics. The NK approach can be thought of as a combination of the techniques of the “Real Business Cycle” literature (rational expectations, continuous market clearing, dynamically optimizing agents) with “old” Keynesian economics (monetary policy has real effects, government spending has a multiplier effect, etc.). The model is simple enough to be taught to first-year econ students and at the same time rich enough to provide a basis for discussing the effects of monetary policy, technology shocks, fiscal policy, the distinction between expected and unexpected shocks, and more. It is also much closer to current macroeconomic research than the usual AS-AD model contained in most textbooks. The model has a natural extension to an open-economy setting, which is contained in this paper by Corsetti & Pesenti.

Here goes.

Technology. An economy’s output (Y) is produced by labor (L) alone. The aggregate production function is

Y = A*L,                           (1)

where A is the technology parameter (labor productivity).

Households. Households consume output and supply labor. They trade off the marginal utility from consumption against the marginal disutility of working. Under usual assumptions about the shape of utility functions, consumption will be an increasing function of the real wage. Denoting the nominal wage by W and the price level by P, let household consumption (C) be given by

C = k*(W/P),                 (2)

where k is a positive parameter. The basic intuition behind this consumption function is that a higher real wage induces people to substitute consumption for leisure (substitution effect) and raises their real income (income effect). Both effects act to increase consumption, while the effect on labor supply is ambiguous.*

In order to purchase goods, households must hold money. Money demand (M) is a function of nominal consumption spending:

M = (1/v)*P*C,             (3)

where v is the (exogenous) velocity of money. Note that this is just a version of the quantity theory of money. The money supply is set by the central bank and is exogenous to the model. We will think of M as describing the stance of monetary policy.

Firms. Firms compete in a monopolistic way, i.e. each firm has a monopoly over the specific kind of consumption good it produces, but there is a large number of close substitutes. It can be shown that under this kind of competition, the aggregate price level will be set as a mark-up over the marginal cost of production. Nominal marginal cost equals W/A: it takes 1/A hours to produce one unit of output and each hour costs W euros.

Crucially, firms must set prices before learning the labor productivity and the monetary policy stance. Hence, they must form expectations about nominal marginal costs. Let z be the mark-up, which indicates the market power of firms (which in turn depends on how “tough” competition is in the goods market). Then the price level is given by

P = z*E(W/A),               (4)

where E() denotes the expected value conditional on information available to firms when they set prices.

Closing the model. The model is closed by the goods market clearing condition:

Y = C.                              (5)

This is a model with five endogenous variables (Y, C, L, W, and P) and two exogenous variables (M and A). Let’s find the general equilibrium of this economy. First, combine (2) and (3) to get

W = (v/k)*M.                 (6)

Taking expectations and inserting into (4) yields

P = (z*v/k)*E(M/A).          (7)

Next, combine (3), (5) and (6) to get

Y = (k/z)*M/E(M/A).          (8)

Re-inserting this into (1) yields

L = (k/z)*[M/A]/[E(M/A)].    (9)
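Steps (6)-(9) can be sanity-checked numerically by treating the expectation E(M/A) as a free number, since firms price on prior information (a sketch; all parameter values are arbitrary):

```python
# Check the derivation (6)-(9) with arbitrary numbers. E(M/A) is kept as
# a separate value Em, because firms set P before observing M and A.
k, v, z = 0.9, 1.4, 1.2
M, A, Em = 2.0, 1.1, 1.7

W = (v / k) * M          # (6): from C = k*W/P and M = (1/v)*P*C
P = (z * v / k) * Em     # (7): P = z*E(W/A) = (z*v/k)*E(M/A)
C = v * M / P            # money demand (3) solved for C
Y = C                    # goods market clearing (5)
L = Y / A                # production function (1) inverted

assert abs(Y - (k / z) * M / Em) < 1e-12        # matches (8)
assert abs(L - (k / z) * (M / A) / Em) < 1e-12  # matches (9)
print("(8) and (9) check out")
```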

Equilibrium. Suppose that, in the long run, expected values equal actual values, i.e. E(M/A)=M/A. This is just the rational expectations assumption which in this context means that firms don’t make persistent, systematic mistakes in forming expectations about productivity and monetary policy. With this assumption, (8) reduces to

Yn = (k/z)*A,

which we can call the natural rate of output or full-employment output. It increases in productivity and decreases in the degree of monopolistic distortions. The long-run (“natural”) level of employment is given via (9) by

Ln = k/z.

Using these results in (8) yields

Y/Yn = [M/A]/[E(M/A)].

This equation relates the ratio of actual to natural output (the output gap) to the monetary stance and the state of technology. What exactly does this mean?

- An unexpected increase in the money supply raises output above its natural level. The reason is that an increase in M while P is fixed makes households spend more, which raises output and employment.
- An unexpected increase in labor productivity pushes output below its natural level. The reason is that a higher A increases potential output, but does nothing to stimulate household spending. Hence output stays the same while labor demand (and therefore employment) falls. So a positive technology shock produces underemployment in the short run.
- Expected changes in monetary policy or technology have no effect on the output gap. In the long run, money is completely neutral with respect to Y and L.
- If the central bank has a way of knowing A in advance (for instance, because it employs competent economists who can forecast A perfectly), it could set M in such a way as to completely stabilize the economy at the natural output level: it “simply” has to set M = b*A for some constant b.
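The first two of these experiments can be played out numerically (a sketch of the model’s short-run equilibrium; the parameter values are arbitrary illustrations, and point expectations E(M/A) = M_exp/A_exp are assumed):

```python
# Short-run equilibrium of the toy model: firms preset P on expectations,
# then the actual M and A are realized.
def equilibrium(M, A, M_exp, A_exp, k=1.0, v=1.0, z=1.2):
    P = (z * v / k) * (M_exp / A_exp)   # (7): preset price level
    Y = v * M / P                        # (3)+(5): demand-determined output
    Y_nat = (k / z) * A                  # natural output
    return Y, Y_nat

# Baseline: expectations are correct -> output at its natural level.
Y, Yn = equilibrium(M=1.0, A=1.0, M_exp=1.0, A_exp=1.0)
print(round(Y / Yn, 2))   # 1.0

# Unexpected 10% money expansion -> a positive output gap of 10%.
Y, Yn = equilibrium(M=1.1, A=1.0, M_exp=1.0, A_exp=1.0)
print(round(Y / Yn, 2))   # 1.1

# Unexpected 10% productivity gain -> output below the new natural level.
Y, Yn = equilibrium(M=1.0, A=1.1, M_exp=1.0, A_exp=1.0)
print(round(Y / Yn, 2))   # 0.91
```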

Fiscal policy. How do we get fiscal policy into the model? Easy. Just add government spending into the goods market clearing condition:

Y = C + G                                     (5*)

and assume for simplicity that the government makes spending proportional to total output, G = g*Y. (You must also assume that the government finances its expenditure by lump-sum taxes on households only, so that firms’ pricing decisions and households’ labor supply are not distorted.) In this case natural output becomes

Yn =(k/z)*A/(1-g),

which increases in g. Government spending doesn’t affect the output gap, though, because it moves actual and potential output by the same amount.

*) A utility function which gives rise to such a consumption function is U(C,L) = log(C) - (1/k)*L.
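To verify the footnote’s claim, maximize this utility function subject to the household budget constraint P·C = W·L (a sketch; λ denotes the Lagrange multiplier, which is my notation, not the post’s):

```latex
\frac{1}{C} = \lambda P, \qquad \frac{1}{k} = \lambda W
\;\Longrightarrow\; \lambda = \frac{1}{kW}
\;\Longrightarrow\; C = \frac{1}{\lambda P} = k\,\frac{W}{P},
```

which is exactly the consumption function (2).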

# Why I like DSGE models

Christoph has recently vented his frustration about the “DSGE bashing” now popular in the econ blogosphere. I feel this frustration, too, not because I believe DSGEs are perfect, but because I think that much of the popular criticism is ill-informed. Since I have worked with DSGE models in my recent research, I can call myself a card-carrying member of the club of DSGE aficionados. So I thought I’d briefly explain why I like DSGEs and what I think they are good for.

I think of DSGE models as applying ordinary principles of economics – optimizing behavior and market equilibrium (GE for general equilibrium) – to a world that evolves over time (D for dynamic) and is subject to chance (S for stochastic). When I say optimizing behavior I don’t necessarily mean rational expectations and when I say equilibrium I don’t necessarily mean market clearing. There are DSGEs without rational expectations and with non-clearing markets out there, although admittedly they are not the most widely used ones. I find this general approach attractive, because it brings us closer to a Unified Economic Science that uses a single set of principles to explain phenomena at the micro level and at the macro level.

But that’s not the most important reason I like DSGEs. The most important reason is that they make precise, and thus help clarify, commonly held notions about business cycles, economic crises and economic policy. Take, for instance, the notion of “recession”. In popular discussion a “recession” is when GDP growth is negative, or at least below what is conceived as a normal or desirable rate. In DSGE models, a recession is a negative output gap: the difference between the actual level of output and the level which would occur if prices were fully flexible (the “natural rate of output”). DSGEs make it clear that a negative growth rate is not necessarily bad (if the weather is bad in April and better in May, you want production to go down in April and up in May) and a positive growth rate not necessarily good (two percent real growth can sometimes mean an overheating economy and sometimes a sluggish one). You have to look at more than one variable (at least two: output growth and inflation) to decide whether the economy is in good or bad shape.

Another reason I like DSGEs is that they discuss economic policy in a much more coherent and sensible manner than most of the earlier literature – and much more so than the financial press. The important question about any policy X is not “Does X increase GDP or reduce unemployment or increase asset prices?”, but “Does X increase the utility of households?”. Also, because DSGEs are dynamic models, they put the focus on policy rules, i.e. how policymakers behave across time and in different situations, instead of looking only at what policymakers do right now and in this particular situation.

There is a lot of valid criticism of DSGEs: they are often too simplistic, sweep important but hard-to-model aspects under the rug and, as a result, have lots of empirical issues. But these things should encourage us to make DSGEs better, not to return to the even more simplistic approaches that previously dominated macroeconomics.

# How to judge (macro)economic models or Why Paul Krugman gets it wrong

Paul Krugman recently participated in a discussion about the current state of macroeconomics, particularly about the “dominant” paradigm of DSGE models and their predecessor, the IS-LM model. As someone who uses DSGE models myself and disagrees with basically all of what he said, let me comment on an especially unconvincing piece of reasoning:

“[…] how [do] we know that a modeling approach is truly useful. The answer, I’d suggest, is that we look for surprising successful predictions. So has there been anything like that in recent years? Yes: economists who knew and still took seriously good old-fashioned Hicksian IS-LM type analysis made some strong predictions after the financial crisis”.

In short: forget DSGE models and related stuff and go back to IS-LM, because people using IS-LM recently made some “right” predictions. Is an exclusive focus on a model’s “predictions” a reasonable criterion for judging (macro)economic models, or how else should they be judged?

With respect to the former, let me give an admittedly extreme example. Over the last few decades, we could have predicted EU agricultural policy very well by assuming that the aim of policymakers was to reduce consumer welfare. Would you resort to this sort of model when discussing likely upcoming agricultural policies from an outsider perspective? I would not.

Can we take up the other extreme and simply judge a model by the realism of its assumptions? I suggest we can’t. “Let’s assume a representative agent who cares about consumption, leisure and wearing blue jeans”, to take another extreme example. Would you reject a macroeconomic argument formulated with this agent because of the unrealism of the blue-jeans assumption (as long as the model is not intended to explain the jeans market)? I would not, because in this context I regard this assumption as irrelevant for the model’s conclusions/predictions. The difference to the first example is then of course that there I’m convinced the assumption matters for the results, while here I’m convinced it does not.

So one cannot judge models solely by their predictions, because the underlying assumptions, in combination with the implied propagation mechanisms, might be clearly implausible or unconvincing and important for the predictions. Assessing the latter in turn requires digging into the model dynamics, which implies that one also cannot base a judgement solely on the realism of the assumptions.

What would a reasonable criterion that takes the above findings into consideration look like? As so beautifully described in McCloskey’s essay on the “Rhetoric of Economics”, (macro)economists are persuaders. What they do is “careful weighing of more or less good reasons to arrive at more or less probable or plausible conclusions – none too secure, but better than would be arrived at by chance or unthinking impulse; it is the art of discovering warrantable beliefs and improving those beliefs in shared discourse […]” (Booth 1961, see the above link p. 483 for the exact citation). The purpose of models, I’d argue, is then to help organize that discourse: to structure your own thinking, clarify the debate and provide a framework for confronting thought with data. In order to be useful for that, they need to be “persuasive” in the context in which they are used. The realism of the assumptions and the implied propagation mechanisms, their respective importance for the results, and the model’s fit to the data are all part of the subjective assessment against that criterion.

How about DSGE vs. IS-LM in policy debate, then? IS-LM and related models were mainly discarded in policy analysis because they incorporate reduced-form parameters (e.g. the interest-rate responsiveness of investment or the marginal propensity to consume) which most economists were not convinced to be sufficiently independent of economic policy. This criticism is as valid today as it was 30 or 40 years ago; none of that has changed. All that has changed is that IS-LM made some “correct” (one may very well discuss the use of this word here, but that’s not the purpose of this blog entry) predictions. Shall we go back to a model that was labelled an unpersuasive tool although the main point of criticism is still valid? No. Shall we use it now for lack of a better alternative? Properly specified state-of-the-art DSGE models are a tool that outclasses IS-LM on virtually every aspect (yes, even when it comes to rational expectations). For the sake of keeping this entry short, I will argue my case for that in a follow-up post.

# Solving an Economy in Excel

I am going to teach basic macroeconomics to undergrads in the fall, and I want to show them the simplest way to solve an economy on the computer. I came across a technique used by Yale’s John Geanakoplos in his fantastic Financial Theory course. He uses it to solve a 2×2 pure exchange economy; I’m applying it to a miniature Keynesian model.

The model has one good, three agents – private consumers, firms, government – and two equations:

(1) C = MPC*(Y – T)

(2) Y = C + I + G

(1) is the simple Keynesian consumption function relating private consumption (C) to disposable income (Y-T), with MPC being the marginal propensity to consume. (2) is the goods market equilibrium condition, where I is business investment and G is government expenditure.

Solving the economy involves three steps: (i) set the MPC parameter, (ii) set the exogenous variables (I, G, T), (iii) solve for the endogenous variables (C, Y). Of course, since this model is all nice and linear, it’s easy to derive closed-form solutions. But in more complicated models such as IS-LM, finding closed-form solutions can quickly become quite tedious. So I want the computer to do it for me.
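For reference, the closed-form solution of this little model follows by substituting (1) into (2):

Y = MPC*(Y - T) + I + G,

and solving for Y:

Y = (I + G - MPC*T)/(1 - MPC),   C = MPC*(Y - T).

The familiar Keynesian multiplier 1/(1 - MPC) sits in front of I and G, which gives a direct check on whatever the computer spits out.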

First I write down all parameters and variables, each in a separate cell. Then I write down equations (1) and (2) as “loss functions”:

(1*) F(C,Y) = C-MPC*(Y-T),

(2*) H(C,Y) = Y-(C+I+G).

After that I define a “total loss function” as the sum of squares of F and H:

(3) L(C,Y) = F(C,Y)^2 + H(C,Y)^2.

I want Excel to minimize L(C,Y) with respect to C and Y, which can easily be done with the Excel Solver. (Restrict all endogenous variables to be non-negative and select GRG Nonlinear as the solution method.) Why does this work? Because L is a sum of squares, its minimum value is zero, and it reaches zero exactly when F(C,Y) and H(C,Y) are both zero, i.e. when (1) and (2) are fulfilled.
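For anyone without Excel at hand, the same trick can be sketched in a few lines of Python, with a plain gradient-descent loop standing in for the GRG solver (the parameter values below are made up for illustration):

```python
# Minimize L(C, Y) = F(C, Y)^2 + H(C, Y)^2 by gradient descent,
# a simple stand-in for Excel's GRG solver. Parameter values are hypothetical.
MPC, I, G, T = 0.8, 200.0, 300.0, 250.0

def F(C, Y):   # residual of the consumption function, eq. (1*)
    return C - MPC * (Y - T)

def H(C, Y):   # residual of goods-market clearing, eq. (2*)
    return Y - (C + I + G)

C, Y = 0.0, 0.0
step = 0.1
for _ in range(20000):
    f, h = F(C, Y), H(C, Y)
    C -= step * (2 * f - 2 * h)          # dL/dC = 2*F - 2*H
    Y -= step * (-2 * MPC * f + 2 * h)   # dL/dY = -2*MPC*F + 2*H

print(round(C, 1), round(Y, 1))          # converges to C = 1000, Y = 1500
```

Changing T or I in the script and re-running it shows the multiplier effects directly, just as re-running the Excel solver does.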

If you want to know the multiplier effect of a tax cut or an investment boom, just change the relevant cell entries and re-run the solver. What a nice toy for rainy days such as this.

Here’s the file: macro simul