This is a joke I have heard many times, once on a big stage at the 2014 annual meeting of the Verein für Socialpolitik, where a supposedly important person from a supposedly important central bank (if I recall correctly) used it as a criticism of current economic methodology (as this person understood it), generalizing it into a criticism of any economic methodology that uses math (if I understood this person correctly).
During the course of a recent online discussion, David Friedman raised the question of how to define “patriarchy” in particular and “power” more generally.
I gave the following answer which I’m sharing with you in order to elicit broader commentary (ideally from people who actually know something about social choice theory):
Conceptually, defining “power” should be straightforward.
Borrowing from standard social choice terminology: a Social Welfare Function maps the set of all preference profiles (a list of policy preference rankings, one for every member of the society) to a unique social preference ranking. Under any such function, if my preferences correlate more with the social preferences than yours do (where correlation is defined in an appropriate way), I am more powerful.
In a dictatorship, the correlation is 1 if I’m the dictator.
In a democracy, the correlation is high if I’m the median voter, low if I’m a member of the political fringe.
Patriarchy is then a system in which men’s policy preferences are more highly correlated with the social preferences than women’s.
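A minimal sketch of this definition, with all details chosen by me for illustration: Kendall's tau is one appropriate way to correlate two rankings, implemented here by hand for strict rankings (no ties).

```python
# Hypothetical sketch: measure an individual's "power" as the Kendall
# rank correlation between their preference ranking over policies and
# the social preference ranking. Assumes strict rankings (no ties).

from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two strict rankings.
    Each ranking maps alternative -> position (0 = most preferred)."""
    items = list(rank_a)
    concordant = discordant = 0
    for x, y in combinations(items, 2):
        a = rank_a[x] - rank_a[y]
        b = rank_b[x] - rank_b[y]
        if a * b > 0:           # the pair is ordered the same way
            concordant += 1
        else:                   # the pair is ordered oppositely
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

social   = {"A": 0, "B": 1, "C": 2}   # social preference ranking
dictator = {"A": 0, "B": 1, "C": 2}   # agrees perfectly -> tau = 1
fringe   = {"A": 2, "B": 1, "C": 0}   # fully reversed  -> tau = -1

print(kendall_tau(dictator, social))  # 1.0
print(kendall_tau(fringe, social))    # -1.0
```

The two extreme cases mirror the post: the dictator's correlation is 1, the political fringe's is at the bottom.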
David raised the following problem with my definition:
Suppose one percent of the population prefer outcome A to outcome B, ninety-nine percent the other way around. The social preference function, in situations where it has to choose between the two, chooses A two percent of the time.
The group of people who prefer B have more power than the group who prefer A, but does it make sense to say that an individual member of the group has more power? Might it make more sense to use a definition in which the question is not whether the social choice function correlates with my preferences but whether a change in my preferences produces a change in the social choice function in the same direction?
I think that’s a very good point. So here is my updated definition of power:
If changes in A’s preference ranking are more highly correlated with changes in the social preference ranking than changes in B’s preference ranking are, A is more powerful than B.
Is this how people in social choice have always defined power? If not, is there a deep problem with this definition which didn’t occur to me?
When should a rational individual believe in a miracle?
David Hume, the great skeptical philosopher, answered: practically never. His argument ran as follows: Miracles are extremely rare events and thus have a very low prior probability. On the other hand, people can be misled rather easily, either by their own senses or by other people. Therefore, the rational reaction to hearing a miracle story is to reject it, unless the evidence supporting it is overwhelming. “Extraordinary events require extraordinary evidence” became a popular summary of Hume’s point of view.
Here is a famous passage from Hume’s “Of Miracles” explaining the point:
When anyone tells me, that he saw a dead man restored to life, I immediately consider with myself, whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened. I weigh the one miracle against the other; and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle.
This argument sounds intuitively plausible and compelling, but it is mistaken. In fact Hume is committing an elementary error in probability theory, which shouldn’t be held against him since “Of Miracles” predates the writings of Bayes and Laplace.
In the language of modern probability theory, Hume asks us to compare the prior probability that miracle X occurred, P(X), to the probability of seeing the evidence Y supporting miracle X even though X did not in fact occur, i.e. the conditional probability of Y given the negation of X, P(Y | not-X). Econometricians would call the latter the likelihood of Y under the hypothesis not-X. If P(X) < P(Y | not-X), Hume says we should reject X in favor of not-X.
But this inference is unwarranted. What a rational observer ought to ask is: Given the evidence Y, is it more likely that X occurred or that it didn’t occur? We are looking for the posterior odds of X conditional on Y: P(X | Y) / P(not-X | Y).
Bayes’ theorem immediately gives us what we are looking for:

P(X | Y) / P(not-X | Y) = [P(Y | X) / P(Y | not-X)] × [P(X) / P(not-X)].
This equation makes it clear that even if Hume’s inequality holds, it is possible that the posterior odds of X are greater than 1. All we need for such a result is that the likelihood of having evidence Y under the hypothesis that X occurred is sufficiently higher than the likelihood of Y under the alternative hypothesis that X did not occur. In econometric terms, the likelihood ratio must exceed a critical value which depends on the prior odds against the miracle:

P(Y | X) / P(Y | not-X) > P(not-X) / P(X).
To conclude: A rational observer is justified in believing a miracle if the evidence for it is sufficiently more likely under the hypothesis that the miracle really did occur than under the hypothesis that it didn’t so as to offset the low prior odds for the miracle. Just comparing the low prior probability of a miracle to the probability of receiving false evidence in favor of it is not enough and can be misleading.
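The argument can be checked with a small numeric sketch; all probability values below are made up purely for illustration.

```python
# Toy illustration of the post's point: even with a tiny prior for a
# miracle X, sufficiently diagnostic evidence Y can push the posterior
# odds P(X|Y)/P(not-X|Y) above 1. All numbers are invented.

def posterior_odds(prior_x, p_y_given_x, p_y_given_not_x):
    """Posterior odds P(X|Y) / P(not-X|Y) via Bayes' theorem."""
    prior_odds = prior_x / (1 - prior_x)
    likelihood_ratio = p_y_given_x / p_y_given_not_x
    return likelihood_ratio * prior_odds

prior_x = 1e-6            # miracles are extremely rare a priori
p_y_given_x = 0.99        # evidence Y is almost certain if X occurred
p_y_given_not_x = 1e-8    # evidence Y is very hard to produce otherwise

odds = posterior_odds(prior_x, p_y_given_x, p_y_given_not_x)
# The likelihood ratio (9.9e7) more than offsets the prior odds (~1e-6),
# so the rational observer ends up believing the miracle.
print(odds > 1)
```

Note that Hume’s comparison would still reject X here, since P(X) = 1e-6 is far below P(Y | not-X) whenever the evidence is at all fallible; the posterior odds show why that comparison is the wrong one.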
When people ask me what I do, I tell them that I am an economist and that my research is about the eurozone crisis, which is enough to satisfy most, but not all, of my conversation partners. Many people want to know exactly what economics is and why it is important. This happens frequently enough that I have prepared a standard response and saved it in my head. But I often wonder how other people respond to the same question.
Therefore I decided to set up a small survey consisting of only 3 questions:
- What is economics?
- What is economics good for?
- What is the most important insight economics has to offer?
You can answer these questions in short or long form, anonymously or with your name. I’d like to get as many different perspectives as possible, so I would encourage you to share this post and/or the survey link below on your social media pages. Warning: I may quote your response in a future post and I may steal it if it’s better than mine.
Looking forward to reading your answers!
In the latest issue of the “Oxford Review of Economic Policy”, Simon Wren-Lewis has written an interesting contribution on the shortcomings of contemporary macroeconomic models. In his article, he argues that the “microfoundations hegemony” is among the core problems holding back progress. I want to add an argument to this debate in support of the incipient collapse of this dogma.
Historically, the idea of basing macroeconomic models on explicit microfoundations originated in the 1970s, leading to the demise of old-style Keynesian models which relied heavily on ad-hoc restrictions such as a constant aggregate savings rate. With the famous Lucas-critique declaring that ad-hoc restrictions cannot be considered invariant to changes in economic policy, a methodological standard rose to dominance in the profession which demands explicit microfoundations as a precondition for doing proper macroeconomic modelling. The following points are central to this doctrine:
I. Explicit microfoundations are needed to make models “robust” to the Lucas-critique.
II. Explicit microfoundations provide the basis for “checking” the internal consistency of the underlying thought.
III. As a pre-condition for being certain about I and II, the microfoundations have to be expressed in the precise language of mathematics.
Although this all seems quite convincing at first sight, the whole idea nevertheless rests on one particularly troublesome misconception of what (macro)economic models usually represent. In the standard view, we see them as simplified representations of reality – as approximations to a complex world. “Think of it like a map! If you design a map of the Austrian highway system, you leave out irrelevant aspects like the trees lining the highway.” Right? OK, so our models are approximations! Approximations of what? Of the real world! Which looks how? Well, of course we cannot know everything in perfect detail – reality is rather complex, but… but you know how to design proper approximations to it? How do you make proper approximations to something you do not really know because it is too complex?
In my view, the majority of (macro)economic models are indeed best seen as approximations, but as approximations of what somebody thinks about the real world rather than of the real world itself. They are formal models of the fuzzy “models” that we have in our brain – models of models, approximations of simplifications. To see this, consider the example below which you may easily find in a standard macro-paper.
“For sake of simplicity, suppose that an infinity of identical firms produce according to Y = f(K, L), with Y giving output, K denoting the capital stock and L the amount of labour.” How do we proceed when we read that?
a. Translate the equation Y=f(K,L) into words: “Ok, so…production uses capital and labour as inputs.”
b. Guess what the author might want to say about the real world:
- “So there is an infinity of firms in the model. Does he/she mean that there is an infinity of firms in the real world? I guess not. So how many firms does he/she mean – 1,000? 1,000,000?”
- “Does he/she mean that all firms are the same in the real world? I hope not!”
- Ah… “for sake of simplicity” – so the assumption was made even though he/she actually means something else – if so, what? Hm…
- “Maybe he/she means that analyzing market power of firms is not necessary for the respective purpose of the model?” Sounds better. Or maybe he/she means that market power is generally negligible – whatever. I’ll just stick to the latter interpretation.
Note that this is a pretty simplified example. In macro models you typically have various actors and feedback effects between consumers, producers, the central bank etc. If you let 10 people carry out the above steps for such models, you will usually get 10 different interpretations. To overcome this, you may introduce some form of heterogeneity in the model, try to get a slightly more realistic expression of competition, and so on. You will nevertheless end up with mathematical expressions that do not correspond to what you actually want to say about people’s behavior and their interactions. In other fields, the difference between the formal model and the model you have in mind may be small; in macro, the gap is usually rather pronounced.
What does that imply for the future of macroeconomics? I assume here that one is willing to follow some form of McCloskey’s view of economists as “persuaders”, i.e. we are interested in changing the fuzzy “models” in our brain or in other people’s brains, while the formal ones are only tools for achieving this. It follows:
i) Explicit microfoundations may help to address the Lucas-critique, but they cannot make it immune since other people may simply not interpret the parameters of the formal microfoundations as structural. More importantly, a model that is not explicitly microfounded may be reasonably interpreted by people to be robust by adding an informal story. Both proceedings end up with an informal judgement. Explicit microfoundations are therefore neither necessary nor sufficient to address the Lucas-critique and by using them we do not overcome the final step of informal, fuzzy, subjective judgements.
ii) Since the formal model on paper and the fuzzy model in our brain are distinct, the internal consistency of the formal structure is neither necessary nor sufficient for the consistency of the underlying thought.
iii) Mathematical models are not an intrinsically precise way of communicating economic ideas. Ordinary speech may promote clarity since it describes the fuzzy models in our brains directly rather than approximating them with the often pretty rough formal elements available.
With all this, I neither want to say that we should completely depart from explicit microfoundations nor that we should abandon mathematical representations. I think both are powerful tools for bringing macroeconomics forward. There is just no reason to apply them dogmatically without thinking about whether doing so makes sense for the purpose at hand, and it is certainly unjustified to impose this standard on others when judging their contributions, at least if one’s arguments in favor of this standard are based on I–III. Finally, given that the gap between the formal and the fuzzy model is often pretty sizeable, we cannot stick to simply throwing models at each other. Models can be great tools for thinking, but in the end somebody will have to give the actual argument about the source of, e.g., the recent financial crisis. This necessitates using and describing the relevant parts of his/her fuzzy model, which ideally will have been sharpened by the formal ones. And doing so requires fuzzy, ordinary language, not math!
This is going to be super abstract, potentially infuriating and probably wrong.
I sometimes hear people talk about „disequilibrium economics“ and I think I know what they have in mind. Equilibrium is often associated with a system at rest. That’s the physicist’s notion of equilibrium: a ball sitting at the bottom of a bowl, a planet moving around the sun in a stable orbit, etc. Disequilibrium is something not at rest: you hit the ball and it jiggles around inside the bowl, a planet collides with another and flies off its orbit.
Economists have a different notion of equilibrium. Indeed, they have several different notions depending on the context. But basically, an economic equilibrium is a consistency condition imposed on a model by the economist. It follows that „disequilibrium economics“ is a logical impossibility.
Let me explain. Economists build models to explain certain real-world phenomena, say bank runs. Inside these models there are agents, e.g. savers, banks, firms, each described by their preferences, beliefs and constraints. For instance, a saver wants to keep her money in the bank as long as she believes she will get it back eventually. Whether she can get it back depends on the number of savers who demand their money back. As long as most of them don’t want to withdraw their money, everything is fine. However, if there is a critical mass of savers who want their money back, the bank needs to liquidate its assets prematurely at „fire-sale“ prices, which means it cannot repay all the savers’ deposits in full. You have two equilibria: one in which nobody runs on the banks, the banks carry their investments to maturity, everyone gets repaid; another one in which everyone runs, the banks liquidate their investments prematurely, people don’t get repaid in full.
Only the first of these equilibria can sensibly be characterized as „a system at rest“. In the second equilibrium, nothing is at rest: there is chaos in the streets, banks go bust and people get hurt.
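The two equilibria can be demonstrated with a toy best-response computation; the payoff numbers below are mine and purely illustrative.

```python
# Toy coordination game behind the bank-run story (payoffs invented).
# Each saver chooses "stay" or "run". If few others run, the bank
# survives and staying pays more; if everyone runs, the bank fails and
# running is the best response. Both all-stay and all-run are
# equilibria: no single saver wants to deviate.

def best_response(others_running_share):
    """A saver's best response given the share of other savers who run."""
    # Bank survives (full repayment) only if fewer than half run.
    payoff_stay = 1.0 if others_running_share < 0.5 else 0.0
    payoff_run = 0.7   # early withdrawal recovers part of the deposit
    return "run" if payoff_run > payoff_stay else "stay"

def is_equilibrium(everyone_runs):
    """Check that the common action is a best response to itself."""
    share = 1.0 if everyone_runs else 0.0
    action = "run" if everyone_runs else "stay"
    return best_response(share) == action

print(is_equilibrium(False))  # True: if nobody runs, staying is optimal
print(is_equilibrium(True))   # True: if everyone runs, running is optimal
```

Both checks pass: each action profile is internally consistent, even though only the first one looks like “a system at rest”.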
What characterizes both equilibria are two conditions:
- Everyone is doing the right thing given their preferences, beliefs, and constraints. The saver who runs on the bank is doing the right thing: Given that everyone else runs, she should run, too, or else she will get nothing. This is called rational behavior, but it should really be called consistent behavior. It’s behavior that is consistent with an agent’s preferences, beliefs and constraints.
- Things need to add up. Or to put it in fancier language: individual decisions need to be consistent with each other. The total value of deposits repaid cannot exceed the total value of assets held by the banks. If there are 10 cookies and I want to eat 8 and you want to eat 5, that’s not an equilibrium. It’s a „disequilibrium“. It’s a logical impossibility.
If you’re a behavioral economist, you may take issue with condition (1). You may argue that people often don’t do the right thing, they are confused about their beliefs and they don’t understand their constraints very well. That’s fine with me. Let agents do their behavioral thing and make mistakes. (Although you must be explicit about which mistake out of the approximately infinite number of mistakes they could make they actually do make.) But still, things need to add up. I may be mistaken to want 8 cookies and you may be confused to want 5, but there are still only 10 cookies. Behavioral economics still needs condition (2).
If you’re a first-year undergrad, you may think equilibrium means that markets clear. Then you learn about asymmetric information and realize that things like credit rationing can occur in equilibrium. And you learn about search models, where the adding-up constraints may be inequality constraints rather than market-clearing equalities.
Finally, you cannot „test for equilibrium“ with data. Equilibrium is that which your model predicts. If your prediction is contradicted by the data, it’s because your model is wrong, not because there is „disequilibrium“. I have heard econometricians talk about error correction models where they call the error correction term a measure of „disequilibrium“. What they mean by that is that their economic model can only explain the long-run relationship between variables (the cointegration part), from which there are unexplained short-run deviations. But that just means the model is wrong for these short-run movements.
Equilibrium means consistency at the individual and at the aggregate level. It doesn’t mean stable, it doesn’t mean perfect. In fact, it is completely devoid of empirical content in and of itself. It only becomes meaningful in the context of a concrete model. And without it, economic models wouldn’t make any sense.
Christoph has recently vented his frustration about the “DSGE bashing” now popular in the econ blogosphere. I feel this frustration, too, not because I believe DSGEs are perfect, but because I think that much of the popular criticism is ill-informed. Since I have worked with DSGE models recently in my research, I can call myself a card-carrying member of the club of DSGE aficionados. So I thought I’d briefly explain why I like DSGEs and what I think they are good for.
I think of DSGE models as applying ordinary principles of economics – optimizing behavior and market equilibrium (GE for general equilibrium) – to a world that evolves over time (D for dynamic) and is subject to chance (S for stochastic). When I say optimizing behavior I don’t necessarily mean rational expectations and when I say equilibrium I don’t necessarily mean market clearing. There are DSGEs without rational expectations and with non-clearing markets out there, although admittedly they are not the most widely used ones. I find this general approach attractive, because it brings us closer to a Unified Economic Science that uses a single set of principles to explain phenomena at the micro level and at the macro level.
But that’s not the most important reason I like DSGEs. The most important reason is that they make precise, and thus help clarify, commonly held notions about business cycles, economic crises and economic policy. Take, for instance, the notion of “recession”. In popular discussion a “recession” is when GDP growth is negative or at least below what is considered a normal or desirable rate. In DSGE models, a recession is a negative output gap: the difference between the actual level of output and the level which would occur if prices were fully flexible (the “natural rate of output”). DSGEs make it clear that a negative growth rate is not necessarily bad (if the weather is bad in April and better in May, you want production to go down in April and up in May) and a positive growth rate not necessarily good (two percent real growth can sometimes mean an overheating economy and sometimes a sluggish one). You have to look at more than one variable (at least two: output growth and inflation) to decide whether the economy is in good or bad shape.
Another reason I like DSGEs is that they discuss economic policy in a much more coherent and sensible manner than most of the earlier literature – and much more so than the financial press. The important question about any policy X is not “Does X increase GDP or reduce unemployment or increase asset prices?”, but “Does X increase the utility of households?”. Also, because DSGEs are dynamic models, they put the focus on policy rules, i.e. how policymakers behave across time and in different situations, instead of looking only at what policymakers do right now in this particular situation.
There is a lot of valid criticism against DSGEs: they often are too simplistic and sweep important but hard-to-model aspects under the rug and they, as a result of that, have lots of empirical issues. But these things should encourage us to make DSGEs better, not return to the even more simplistic approaches that previously dominated macroeconomics.
Paul Krugman recently participated in a discussion about the current state of macroeconomics, particularly about the “dominant” paradigm of DSGE models and their predecessor, the IS-LM model. As someone who uses DSGE models myself and disagrees with basically all of what he said, let me comment on an especially unconvincing piece of reasoning:
“[…] how [do] we know that a modeling approach is truly useful. The answer, I’d suggest, is that we look for surprising successful predictions. So has there been anything like that in recent years? Yes: economists who knew and still took seriously good old-fashioned Hicksian IS-LM type analysis made some strong predictions after the financial crisis”.
In short: forget DSGE models and related stuff and go back to IS-LM, because people using IS-LM recently made some “right” predictions. Is an exclusive focus on its “predictions” a reasonable criterion for judging (macro)economic models, or how else should they be judged?
With respect to the former, let me give you an admittedly extreme example. Over the last few decades, we could have very well predicted the EU agricultural policy by assuming that the aim of policy makers was to reduce consumer welfare. Would you resort to this sort of model when discussing likely upcoming agricultural policies from an outsider perspective? I would not.
Can we take up the other extreme and simply judge a model based on the realism of its assumptions? I suggest we can’t. “Let’s assume a representative agent who cares about consumption, leisure and wearing blue jeans”, to take another extreme example. Would you reject a macroeconomic argument formulated with this agent based on the unrealism of the blue-jeans assumption (as long as the model is not intended to explain the jeans market)? I would not, because in this context I regard this assumption as irrelevant for the model’s conclusions/predictions. The difference with respect to the first example is of course that in the former, I’m convinced that the assumption matters for the results; in the latter, I’m convinced it does not.
So one cannot judge models solely by their predictions, because the underlying assumptions, in combination with the implied propagation mechanisms, might be clearly implausible/unconvincing and important for the predictions. Assessing the latter in turn requires digging into the model dynamics, implying that one also cannot base a judgement solely on the realism of the assumptions.
What might a reasonable criterion that takes the above findings into account look like? As so beautifully described in McCloskey’s essay on the “Rhetoric of Economics”, (macro)economists are persuaders. What they do is “careful weighing of more or less good reasons to arrive at more or less probable or plausible conclusions – none too secure, but better than would be arrived at by chance or unthinking impulse; it is the art of discovering warrantable beliefs and improving those beliefs in shared discourse […]” (Booth 1961; see the above link, p. 483, for the exact citation). The purpose of models, I’d argue, is then to help organize that discourse: to structure your own thinking, clarify the debate and provide a framework for confronting thought with data. In order to be useful for that, they need to be “persuasive” in the context in which they are used. The realism of the assumptions and the implied propagation mechanisms, their respective importance for the results, and the model’s fit to the data are all part of the subjective assessment with respect to that criterion.
How about DSGE vs. IS-LM in policy debate, then? IS-LM and related models were mainly discarded in policy analysis because they incorporate reduced-form parameters – e.g. the interest-rate responsiveness of investment or the marginal propensity to consume – which most economists were not convinced were sufficiently independent of economic policy. This criticism is as valid today as it was 30 or 40 years ago; none of that has changed. All that has changed is that IS-LM made some “correct” (one may very well debate the use of this word here, but that’s not the purpose of this blog entry) predictions. Should we go back to a model that was labelled an unpersuasive tool although the main point of criticism is still valid? No. Should we use it now for lack of a better alternative? Properly specified state-of-the-art DSGE models are a tool that outclasses IS-LM on virtually every aspect (yes, even when it comes to rational expectations). To keep this entry short, I will argue my case for that in a follow-up post.
Reading on (you may want to read my previous post before this one), I found another beautiful example of hypothesis testing in literature with a pretty clear p-value calculation. I am now reading P. G. Wodehouse’s “Right Ho, Jeeves”, first published in 1934 I believe.
I am doing some summer reading and just came across a nice literary example of one of the key methodological approaches in science: hypothesis testing.
What do we do when we perform a hypothesis test? We form a theory and call it our null hypothesis. We then look at data and ask ourselves how probable it is that we would see this data (or something like it) if the null hypothesis were true. This probability is called the p-value. If this probability is very low, we then abandon our null hypothesis in favor of its opposite.
Detective stories are generally a good source of examples of this approach, as detectives constantly entertain theories or hypotheses that have to be revised or rejected as new evidence is found. The present example is special in that the author really gives us all the steps of such a test in a specific setting, including the calculation of the p-value, that is, the probability of seeing such data as was observed under the assumption that the null hypothesis is true.
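As a concrete numeric illustration of the procedure (my own toy example, not from the book): testing whether a coin is fair after observing 9 heads in 10 flips.

```python
# Toy hypothesis test. Null hypothesis H0: "the coin is fair" (p = 0.5).
# The p-value is the probability, under H0, of seeing data at least as
# extreme as what we observed, here 9 or more heads in 10 flips.

from math import comb

def binomial_tail(n, k, p=0.5):
    """P(at least k successes in n trials) with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_value = binomial_tail(10, 9)
print(round(p_value, 4))  # 0.0107: under a fair coin, 9+ heads is rare
print(p_value < 0.05)     # True: reject the null at the 5% level
```

The p-value works out to 11/1024 ≈ 0.011: if the coin really were fair, data this extreme would occur only about once in a hundred trials, so we abandon the null hypothesis in favor of a biased coin.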