On Lying, III

In my previous post I argued that, in a repeated setting, a person can be kept truthful by the threat of never being believed again once caught lying even once. As I pointed out in that post and in one comment, this is a strategy that many proverbs suggest.

In this post I want to ask whether this threat is a credible one. I will have two answers to this question: yes and no.

Haha. Well, it depends on what you call a “credible” threat. The best-known notion of a non-credible threat is due to Reinhard Selten. Paraphrasing his work somewhat, a threat is not credible if, once asked to actually carry it out, people do not find it in their own interest to do so. Selten then defined a Nash equilibrium to be free of non-credible threats if it is a subgame perfect equilibrium, that is, a Nash equilibrium that is also a Nash equilibrium in every subgame. What does that mean? It means the strategy profile the players follow is such that, no matter what has happened in the game so far, the continuation of the strategy profile from any point onwards is still one from which no player would want to deviate, as long as they believe the other player follows their part of it.

Let us come back to the nappy-changing game described in my first post in this series. Here is a brief summary. I ask Oscar if his nappy is full (after some initial but uncertain evidence pointing slightly in this direction). Oscar can make his answer, which is either “yes” or “no”, depend on the true state of his nappy (full or clean). I then listen to his answer and make my decision whether or not to check the state of his nappy a function of the answer he gave. Let me reproduce the normal-form depiction of this game here:

 \begin{array}{c|cccc} & \mbox{always c} & \mbox{trust} & \mbox{opposite} & \mbox{never c} \\ \hline \mbox{always yes} & 0,\alpha & 0,\alpha & 1,1-\alpha & 1,1-\alpha \\ \mbox{truthful} & 0,\alpha & 1-\alpha,1 & \alpha,0 & 1,1-\alpha \\ \mbox{opposite} & 0,\alpha & \alpha,0 & 1-\alpha,1 & 1,1-\alpha \\ \mbox{always no} & 0,\alpha & 1,1-\alpha & 0,\alpha & 1,1-\alpha \\ \end{array}

 

The one-shot game only has equilibria in which I do not trust Oscar’s statement and always check his nappy. This is bad for both of us: it would be better for both of us if Oscar was truthful and I believed him. This, I argued in the previous post in this series, can be made an equilibrium outcome if the two players play the grim trigger strategy suggested by all these proverbs. Note that I keep assuming that I (as a player in this game) can always find out about the true state of the nappy sooner or later, an assumption that has rightly been pointed out not to be completely plausible in all cases. [One could here talk about the more recent literature on repeated games with imperfect monitoring, but I will refrain from doing so at this point – the reader may want to consult the 2006 book by Mailath and Samuelson, Repeated Games and Reputations.]
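The claim about the one-shot game can be checked by brute force. Here is a minimal Python sketch, assuming the payoff convention (Oscar, me) from the table above and an illustrative prior of  \alpha = 0.6  (the story suggests  \alpha  somewhat above one half); it enumerates all pure-strategy equilibria:

```python
# Brute-force search for pure-strategy Nash equilibria of the one-shot
# nappy game. Payoffs are (Oscar, me), copied from the table above.
# alpha = 0.6 is an illustrative choice with alpha > 1/2.
a = 0.6
rows = ["always yes", "truthful", "opposite", "always no"]   # Oscar's strategies
cols = ["always check", "trust", "opposite", "never check"]  # my strategies
payoff = {
    ("always yes", "always check"): (0, a),   ("always yes", "trust"): (0, a),
    ("always yes", "opposite"): (1, 1 - a),   ("always yes", "never check"): (1, 1 - a),
    ("truthful", "always check"): (0, a),     ("truthful", "trust"): (1 - a, 1),
    ("truthful", "opposite"): (a, 0),         ("truthful", "never check"): (1, 1 - a),
    ("opposite", "always check"): (0, a),     ("opposite", "trust"): (a, 0),
    ("opposite", "opposite"): (1 - a, 1),     ("opposite", "never check"): (1, 1 - a),
    ("always no", "always check"): (0, a),    ("always no", "trust"): (1, 1 - a),
    ("always no", "opposite"): (0, a),        ("always no", "never check"): (1, 1 - a),
}

def pure_nash():
    eq = []
    for r in rows:
        for c in cols:
            oscar_best = payoff[(r, c)][0] >= max(payoff[(r2, c)][0] for r2 in rows)
            my_best = payoff[(r, c)][1] >= max(payoff[(r, c2)][1] for c2 in cols)
            if oscar_best and my_best:
                eq.append((r, c))
    return eq

# In every pure equilibrium I play "always check" and Oscar's answer is ignored.
print(pure_nash())
```

Note that this only checks pure strategies; it confirms, for this  \alpha , that no pure equilibrium has me trusting Oscar.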

The grim trigger strategy is as follows. I believe Oscar as long as he was always truthful in the past (and as long as I was always trusting in the past). Once I catch Oscar lying (or once I have not trusted Oscar) I never again believe him and always check his nappy from then on. Oscar is truthful as long as I have always trusted him (and as long as he has always been truthful). Once he catches me not trusting him (or once Oscar himself was untruthful) Oscar will always say no from that point on. The statements in brackets probably seem strange to someone not used to game theory, but they are needed for the statements below to be fully correct.

This grim trigger strategy, then, is a subgame perfect equilibrium, provided Oscar’s discount factor  \delta > \alpha . Why? I have already argued that, in this case, Oscar would not want to deviate from the equilibrium path of being truthful, because lying would lead to my never trusting him again and this is sufficiently bad for him if his discount factor is sufficiently high. I certainly have no incentive to deviate from this equilibrium path, because I get the best payoff I can possibly get in this game. But subgame perfection also requires that the threat, when we are asked to carry it out, is in itself also equilibrium play. So suppose Oscar did lie at some point and I (and Oscar) are now supposed to carry out the threat. What do we do? Well, we now play the equilibrium of the one-shot game forever and ever. But this is of course an equilibrium of the repeated game, so the grim trigger strategy described here does indeed constitute a subgame perfect equilibrium.
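The condition  \delta > \alpha  comes from a one-shot deviation argument and is easy to check in a few lines. A sketch, with per-period payoffs read off the table above and illustrative numbers:

```python
# One-shot deviation check for the grim trigger strategy.
# On the equilibrium path Oscar earns 1 - alpha each period (truthful, trusted).
# A single lie pays 1 today but triggers "always check" (payoff 0) forever after.
def truthful_is_better(alpha, delta):
    on_path = (1 - alpha) / (1 - delta)  # discounted value of staying truthful
    deviate = 1 + 0                      # lie once, then zero forever
    return on_path >= deviate            # holds exactly when delta >= alpha

print(truthful_is_better(0.5, 0.6))  # delta > alpha: the threat sustains truth -> True
print(truthful_is_better(0.5, 0.4))  # delta < alpha: Oscar prefers to lie -> False
```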

So according to Reinhard Selten’s definition the punishment of never ever believing Oscar again is a credible threat.

But others have argued that Reinhard Selten’s notion of a credible threat is only a minimal requirement and further requirements may be needed in some cases. I do not know what Reinhard Selten thought of this, but I guess he would have agreed. So what is the issue?

When Oscar and I look at this game, we should realize that we would both like to play the truthful and trusting equilibrium path. To incentivize us to do so, especially Oscar in this case, we need this threat of my never again believing him if I catch him lying. But suppose we are in a situation in which I have to carry out this threat. Then we would both agree that we are in a bad equilibrium and that we would want to get away from it again. In other words, we would both want to renegotiate. But if Oscar foresees this, that I would always be willing to renegotiate back to the truthful and trusting outcome of the game, then his incentives to be truthful are greatly diminished.

With something like this in mind, Farrell and Maskin (1989) and others have put forward different versions of equilibria of repeated games that are renegotiation-proof, that is, immune to renegotiation.

They call a strategy profile of a repeated game a weakly renegotiation proof equilibrium if all prescribed continuation strategy profiles (after any potential history) are Nash equilibria of the subgame and cannot be Pareto-ranked. This means that any two prescribed continuation strategy profiles (or continuation equilibria, as they call them) must be such that one player prefers one of the two while the other player prefers the other. Note that this is not the case in the grim trigger equilibrium of the repeated nappy-changing game: in this subgame perfect equilibrium, both Oscar and I prefer the original equilibrium play over the one carried out after Oscar lies.
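For the grim trigger strategy this Pareto-ranking test fails, and a couple of lines make it concrete. A sketch, assuming per-period payoffs (Oscar, me) from the table and an illustrative  \alpha = 0.6 :

```python
# The grim trigger's two continuation equilibria CAN be Pareto-ranked,
# which is exactly why it is not weakly renegotiation proof.
alpha = 0.6
on_path = (1 - alpha, 1.0)  # truthful / trust: per-period payoffs (Oscar, me)
punishment = (0.0, alpha)   # one-shot equilibrium: Oscar ignored, I always check

def pareto_dominates(x, y):
    """x is weakly better for both players and strictly better for at least one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Both of us prefer the equilibrium path to the punishment.
print(pareto_dominates(on_path, punishment))  # True
```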

So what is weakly renegotiation proof in the repeated nappy-changing game? Well, I have made some calculations, and the best weakly renegotiation proof equilibrium for me (in terms of my payoffs) that I could find is this: on the equilibrium path Oscar is truthful and I randomize, trusting Oscar with probability  \frac{1}{2-\alpha}  and playing “do not check (regardless of what Oscar says)” with probability  \frac{1-\alpha}{2-\alpha} . For this to work I have to randomize using dice or something like that, in such a way that Oscar can verify that I randomize correctly. If Oscar ever deviates, I then verifiably (to Oscar) randomize, playing “check nappy regardless of Oscar’s answer” with probability  \frac{\alpha}{2-\alpha}  and trusting Oscar with probability  \frac{2(1-\alpha)}{2-\alpha} . Oscar then continues to be truthful. Oscar incentivizes me to behave in this way (after all, I am sometimes letting Oscar do what he wants, which is not what I would want) by punishing me, if I ever deviate, with the following strategy: he continues to be truthful but asks me to play “do not check (regardless of what Oscar says)”. This complicated “construction” works only if the punishment is not carried out forever, but only for a suitably long time, after which we go back to the start of the game with the equilibrium path behavior. Whenever one of us deviates from the prescribed behavior in any stage, we simply restart the punishment phase for this player.

This probably all sounds like gobbledygook, and I admit it is a bit complicated. Before I provide my final verdict of this post, let me just illustrate this supposedly best-for-me renegotiation proof equilibrium. Suppose  \alpha  is essentially one half. Then in the prescribed strategy profile, on the equilibrium path, I am trusting Oscar (which I find best given that he is truthful) with a probability of 2/3, that is, two thirds of the time. But one third of the time I leave Oscar in peace even when he tells me his nappy is full. As this happens one half of the time, I actually leave him in peace despite a full nappy in one sixth of all cases. I have to do this in order for my punishment of him to be actually preferred by me over the equilibrium path play. In the punishment I mostly shift probability from leaving Oscar in peace to always checking him, which is something I do not find so bad, but which Oscar really dislikes.
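The numbers in this paragraph can be checked mechanically. A sketch using exact fractions at  \alpha = 1/2 , with the mixing probabilities as given above (and, for the one-sixth figure, treating  \alpha  as the chance of a full nappy, as in the paragraph):

```python
from fractions import Fraction

# Sanity-check the mixing probabilities of the weakly renegotiation proof
# equilibrium described above, at alpha = 1/2.
alpha = Fraction(1, 2)

# Equilibrium path: I mix between "trust" and "do not check".
p_trust = 1 / (2 - alpha)
p_leave = (1 - alpha) / (2 - alpha)
assert p_trust + p_leave == 1

print(p_trust)          # 2/3: trust Oscar two thirds of the time
print(p_leave)          # 1/3: leave him in peace one third of the time
print(alpha * p_leave)  # 1/6: full nappy AND left in peace

# Punishment phase: I mix between "always check" and "trust".
q_check = alpha / (2 - alpha)
q_trust = 2 * (1 - alpha) / (2 - alpha)
assert q_check + q_trust == 1
```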

Well, I do not know if you find this very convincing, but I do think that the grim trigger strategy is not really feasible when it comes to teaching kids not to lie. What I actually use in real life is a simple trick: I do not punish behavior only within the nappy-changing game itself. I use television-watching rights, something outside the game I just described. The great thing about this is that while Oscar does not like it when his television time is reduced, I am actually quite happy when he watches less television, and so this works as a renegotiation-proof punishment. And the fact that I do this can be explained by the failure of the grim trigger strategy to be renegotiation proof.
