There are many forms of lying, from so-called white lies that are really just a form of politeness to deliberate attempts to misrepresent the truth in order to fashion the policy of some institution in your own interest. Here I am interested in something in the middle of this spectrum: children lying to avoid a slightly unpleasant duty. We all know that a child’s answer to “Have you brushed your teeth?” is not always entirely truthful.
In this and the next two blog posts, using the language of game theory, I want to discuss the incentives to lie and how one could perhaps teach children not to lie.
I don’t think I need to provide empirical evidence that children lie on occasion. In case you have forgotten your own childhood, you may want to look at Eddie Izzard’s treatment of the subject.
To fix ideas let me tell you about a game I used to play with my kids when they were very little. I call it the nappy-changing game. We used to play it often. The situation is always more or less as follows. I am on nappy duty and one of my little ones, let’s call him Oscar, is busy playing. Walking past him, I get a whiff of an interesting smell. I ask Oscar “Is your nappy full?” and Oscar invariably answers with a loud “No“.
How can we rationalize this “data”? First, I need to describe the game between the two of us. The game, crucially, is one of incomplete information. While I believe it is safe to assume that Oscar knows the state of his nappy, I do not. This is the whole point of the game, of course. If I already knew everything, there would be no point in Oscar lying. And if Oscar did not know the state of the nappy himself, one could hardly call his behavior lying. It would just be an expression of his ignorance. But I am pretty sure Oscar knows the state of his nappy. So let us assume that Oscar’s nappy can be in one of two states: full or clean.
A game, to be well defined, needs to have players, strategies, and payoffs (or utilities). The players are obvious, Oscar and I. The strategies, taking into account the information structure, are as follows. I always ask the question, so let this not be part of the game. Then Oscar can say “Yes” or “No” and can make his choice of action a function of the state of the nappy. This means he has four (pure) strategies: always say yes (regardless of the state of the nappy), be truthful (say yes when the nappy is full and say no otherwise), use “opposite speak” (say no when the nappy is full and say yes otherwise), and always say no. I listen to Oscar’s answer and now have four (pure) strategies as well: always check Oscar’s nappy (regardless of Oscar’s answer), trust Oscar (check the nappy if he says yes, leave Oscar in peace if he says no), understand Oscar’s answer as opposite speak (check nappy if he says no, leave Oscar in peace if he says yes), and always leave Oscar in peace.
Let us now turn to the payoffs in this game. Mine are as follows. I want to do the appropriate thing given the state of the nappy. So let’s say I receive a payoff of one if I check Oscar’s nappy when it is full, and also if I do not check it when it is clean (I will find out eventually!). In the other two cases I receive a payoff of zero: when I check the nappy although it is not full, and when I do not check it although it is full (as I have said, I will find out eventually!). One could play with these payoffs, but nothing substantial would change as long as we maintain that I prefer checking to not checking when the nappy is full, and not checking to checking when it is clean. What about Oscar’s payoffs? I think it is fair to assume that he always prefers that I do not check, i.e., that I leave him alone. I am sure he would eventually also want me to change him, but much, much later than I would want to do it, and I will eventually find out and change him. So I think it is fine to assume that Oscar prefers to be left in peace at the moment I ask, regardless of the state of the nappy. So let us give him a payoff of one when he is left alone and a payoff of zero when I check his nappy (in either state).
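The payoff structure just described can be sketched as two small functions (a minimal illustration; the state and action labels are my own, not from the post):

```python
def my_payoff(state: str, check: bool) -> int:
    """I get 1 for the appropriate action: checking a full nappy,
    or leaving a clean one alone; 0 otherwise."""
    appropriate = (check and state == "full") or (not check and state == "clean")
    return 1 if appropriate else 0


def oscar_payoff(check: bool) -> int:
    """Oscar simply prefers to be left in peace, whatever the state."""
    return 0 if check else 1
```

Note that only my payoff depends on the state of the nappy; Oscar’s depends solely on whether I check.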
There is one thing I still need to do with this model: I need to close it informationally. The easiest way to do this is to assume that, ex ante, there is a commonly known (between Oscar and myself) probability of the state of the nappy being full. Let us call it p and let us assume (recall the whiff I got) that p > 1/2. Now, the assumption of a commonly known probability of the nappy being full is a ridiculous one; it is, I am sure, never true. But it allows me to analyze the game more easily, and I believe that in the present case it is not crucial. The eventual equilibrium of this game should be quite robust to slight changes in the informational structure. I leave it to the readers to think about this for themselves.
With all this I can write this game down in so-called normal form, as a 4 by 4 bi-matrix game.
Oscar chooses the row, I choose the column, and the numbers in the matrix are the ex-ante expected payoffs that arise from the various strategy combinations. In each cell of the matrix, Oscar’s payoff is the first entry, mine the second. Once all this is in place it is easy to identify the equilibria of this game. Note that, since p > 1/2, my strategy to never check (“never c”) is strictly dominated by my strategy to always check (“always c”). My ideal point would be that Oscar is truthful and I can trust him, but in that case Oscar has a profitable deviation to always saying no. In that case I had better not trust him and instead always check his nappy. This is indeed the only pure-strategy equilibrium of this game. Well, there is also one in which Oscar always says yes and I always check him, but this is really the same. Note that language has no intrinsic meaning in this game; any meaning it has could only arise in equilibrium.
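The analysis can also be checked computationally. The sketch below enumerates both players’ four pure strategies, builds the ex-ante expected payoffs, and searches for pure-strategy Nash equilibria. The strategy labels and the value p = 0.75 are my own illustrative choices, not from the post; any p > 1/2 gives the same answer.

```python
from itertools import product

p = 0.75  # illustrative prior probability of a full nappy (assumed p > 1/2)
states = ("full", "clean")

# Oscar's pure strategies: a map from the state to his answer.
oscar_strategies = {
    "always yes": {"full": "yes", "clean": "yes"},
    "truthful":   {"full": "yes", "clean": "no"},
    "opposite":   {"full": "no",  "clean": "yes"},
    "always no":  {"full": "no",  "clean": "no"},
}

# My pure strategies: a map from Oscar's answer to whether I check.
my_strategies = {
    "always check": {"yes": True,  "no": True},
    "trust":        {"yes": True,  "no": False},
    "opposite":     {"yes": False, "no": True},
    "never check":  {"yes": False, "no": False},
}

def payoffs(o, d):
    """Ex-ante expected payoffs (Oscar's, mine) for a strategy pair."""
    eo = ed = 0.0
    for state, prob in zip(states, (p, 1 - p)):
        check = my_strategies[d][oscar_strategies[o][state]]
        eo += prob * (0 if check else 1)  # Oscar likes being left alone
        appropriate = (check and state == "full") or (not check and state == "clean")
        ed += prob * (1 if appropriate else 0)  # I like acting appropriately
    return eo, ed

def is_pure_nash(o, d):
    """No player has a strictly profitable unilateral deviation."""
    eo, ed = payoffs(o, d)
    if any(payoffs(o2, d)[0] > eo for o2 in oscar_strategies):
        return False
    if any(payoffs(o, d2)[1] > ed for d2 in my_strategies):
        return False
    return True

equilibria = [(o, d) for o, d in product(oscar_strategies, my_strategies)
              if is_pure_nash(o, d)]
print(equilibria)
```

Running this confirms the discussion above: the only pure-strategy equilibria are the two pooling profiles in which Oscar’s answer is uninformative (always no, or always yes) and I always check, while the truthful/trust profile fails because Oscar profitably deviates to always saying no.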
So what have we learned so far? Clearly Oscar’s behavior (of lying) is not irrational: it is a best reply to my behavior. But it has the unfortunate side effect, from his point of view and also from mine, that I do not trust him, so his lying does not fool me. This game is, by the way, an example of a sender-receiver game; see Joel Sobel’s paper on Signaling Games. In fact it is an example of a special class of sender-receiver games, so-called cheap-talk games; see Joel Sobel’s paper on Giving and Receiving Advice for further reading. In the language of these games, the lying equilibrium between Oscar and me is called a pooling equilibrium. It is called so because the two kinds of Oscar, the one with a full and the one with a clean nappy, both send the same “message”. The two Oscars play the game in such a way that I cannot differentiate between them. Hence the term “pooling”.
In the next post I will take this game up again and consider what can happen if we play this game over and over again, as my kids and I did in the nappy days.