In the psychology lab, experimental versions of formal games often fail to elicit the behavior which the experimenters expect based on their understanding of game theory. These results do not necessarily say much about the prevalence of cold, calculating, self-interested rationality. Some results might be occasions to think about what the player really gains by his or her behavior, and what problems reason is actually being asked to solve.
Some games that psychologists like to play
The one-shot prisoner’s dilemma is a perennial favorite in the lab. Each player chooses one of two options, cooperate or defect. Each player chooses without knowing what the other has chosen. If both players cooperate, each gets a so-so reward. If both defect, then each gets only a poor reward. If one player cooperates and the other defects, then the cooperator gets the worst outcome and the defector gets the best outcome.
Each player, then, faces a reward structure like the one in Newcomb’s Problem.
It is called “one-shot” in contrast to another version to be discussed shortly. In a one-shot situation, the two players are assumed to know that playing this game once will be the only interaction the two of them will ever have with each other. Whatever happens in the game, there will be no second chance to even the score or to reward generosity.
No matter what the other player does, each player is better off defecting. The irony is that if both players do that, then each ends up with the inferior “poor” reward rather than the better “so-so” reward.
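The reward structure just described can be made concrete. The specific numbers below are assumptions for the sketch (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0), chosen only to respect the ordering the text lays out; no particular experiment uses exactly these values.

```python
# Illustrative prisoner's dilemma payoffs (assumed numbers, not from
# any particular experiment). Each entry maps (my move, their move)
# to (my payoff, their payoff).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # the so-so reward for both
    ("cooperate", "defect"):    (0, 5),  # worst for the cooperator, best for the defector
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # the poor reward for both
}

def my_payoff(mine, theirs):
    return PAYOFF[(mine, theirs)][0]

# Defection dominates: whatever the other player does, defecting pays more.
for theirs in ("cooperate", "defect"):
    assert my_payoff("defect", theirs) > my_payoff("cooperate", theirs)

# ...and yet mutual cooperation beats mutual defection. That is the irony.
assert my_payoff("cooperate", "cooperate") > my_payoff("defect", "defect")
```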
Of course, reasonable people do sometimes cooperate in the lab (and in real life), despite lacking assurance that the other player won’t defect and impose the very worst outcome on the cooperator. That behavior is often interpreted as “contrary to game theory,” which it is, but only if the players share the assumptions of whoever has organized the experiment. More on that later.
A variation on the one-shot prisoner’s dilemma is the finitely iterated prisoner’s dilemma. Here, the same prisoner’s dilemma game is played repeatedly by the same players. Both players usually know when it is the last game.
That last game is a one-shot prisoner’s dilemma. Neither player can possibly do better than to defect in that last game, if it really is their last encounter. But that isn’t so of the first game in a series. Each player might cooperate in hopes of inspiring the other player to cooperate. That way, they would each do better than defecting on every game in the series.
But, they “ought to” defect on the last game. That’s the dominant move. Cooperation on the second to last game, therefore, cannot inspire a rational player to cooperate on the last game. So, better to defect then, too. And the game before that? Yes, defect, then, too, since a rational player won’t cooperate in the second to last game.
And so on. This argument is called a “backwards induction.” If you think about it, and accept the argument as a prediction of the other player’s behavior, then you defect on the first game, and on every game. You never cooperate.
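The backwards induction can be traced mechanically. The payoff numbers below are illustrative assumptions (temptation 5, reward 3, punishment 1, sucker 0); the code only walks the argument from the last round back to the first.

```python
# Illustrative one-round payoffs (assumed numbers): (my move, their move) -> my payoff.
PAY = {("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0, ("C", "C"): 3}

# In any single round, defection dominates.
assert all(PAY[("D", t)] > PAY[("C", t)] for t in "CD")

def backward_induction(n_rounds):
    """Reason from the last round back to the first. Once a round's play
    is fixed at defection, cooperating in the round before it cannot buy
    any future cooperation, so that round collapses to defection too."""
    plan = []
    next_round_play = None  # nothing follows the last round
    for _ in range(n_rounds):
        if next_round_play in (None, "D"):
            play = "D"  # no future cooperation to inspire, so defect
        else:
            play = "C"  # never reached, which is the argument's point
        plan.append(play)
        next_round_play = play
    return "".join(reversed(plan))

print(backward_induction(5))  # "DDDDD" -- defection in every round
```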
Of course, in the lab (and in real life) people often decline to accept the argument as a prediction of the other player’s behavior. Maybe that’s because the argument is a lousy predictor. In a long series, people typically will cooperate for a while at the beginning. Later, as the series gets near its end, as the rewards of cooperation have mostly been reaped and the risks of being caught as the sole cooperator loom relatively larger, mutual defection sets in.
That behavior is also often interpreted as “contrary to game theory,” but it isn’t. Unending defection is not the dominant play. People can and do perform better than simply to defect. All game theory has done is to identify the one and only “equilibrium” strategy. There are many ideas about equilibrium, but the most common is (roughly) this: if one player makes an equilibrium play, then the other player cannot do better than to make the equilibrium play as well.
So, always defecting, from the first game on, is an equilibrium strategy. If the other player is always defecting, then I really cannot do better by ever cooperating. That doesn’t make it rational to defect on the first game, since I don’t know whether the other player will always defect or not.
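The equilibrium claim can be checked by brute force over a short series. The payoff numbers are assumed, as before; the check confirms that against an always-defecting opponent, no pattern of cooperation outperforms always defecting.

```python
from itertools import product

def my_total(my_moves, their_moves):
    """Sum my payoffs over a series, using illustrative assumed numbers."""
    pay = {("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0, ("C", "C"): 3}
    return sum(pay[(m, t)] for m, t in zip(my_moves, their_moves))

n = 4
always_defect = "D" * n

# Enumerate every possible pattern of play against an always-defector.
best = max(my_total("".join(p), always_defect) for p in product("CD", repeat=n))

# No pattern does better than defecting right back: the equilibrium property.
assert best == my_total(always_defect, always_defect)
```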
There is no other specific equilibrium advice. Suppose somebody advised the players to “cooperate at first, then defect on game seventeen.” Great. I hope the other fellow takes that advice. I’ll cooperate for the first fifteen games, and then defect on game sixteen. He’ll get caught cooperating, and I’ll get a bonus for that round.
Any such advice rewards the counter-strategy “cooperate one game fewer than advised.” So, game theory can never pick a time to cooperate. Whether that is a “failure” of game theory, however, is another question.
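The undercutting works at every advised switch point, which can be verified directly. Payoffs are the same assumed numbers as in the earlier sketches.

```python
def total(my_moves, their_moves):
    """Sum my payoffs over a series (illustrative assumed numbers)."""
    pay = {("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0, ("C", "C"): 3}
    return sum(pay[(m, t)] for m, t in zip(my_moves, their_moves))

def advised(n, k):
    """Cooperate until round k, then defect from round k onward (1-indexed)."""
    return "".join("C" if r < k else "D" for r in range(1, n + 1))

n = 20
for k in range(2, n + 1):           # whatever first-defection round is advised...
    follower = advised(n, k)        # one player takes the advice
    exploiter = advised(n, k - 1)   # the other defects one round earlier
    # The exploiter always outscores a pair of advice-followers.
    assert total(exploiter, follower) > total(follower, follower)
```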
Our last popular experimental game is called the ultimatum game. It is very simple. One player is chosen to be the dictator. A divisible resource is offered to the players, maybe a roll of forty quarters. The dictator gets to propose a division of the quarters. The other player can accept or decline the offer. If the other player accepts, then the quarters are divided as the dictator has proposed. Otherwise, neither player gets any quarters.
This game, too, has its one-shot and iterated versions. Many people play it simply: the dictator proposes 20-20. That seems fair, and both players pocket their respective five dollars. One “mystery” is why the dictator doesn’t always propose 39-1.
Well, sometimes dictators do, as may be tempting in a one-shot game. Predictably, the other player sometimes balks at that division. Supposedly, that’s a mystery, too. Why isn’t the other player, however grudgingly, taking something, a quarter, rather than nothing?
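The cash stakes in the text’s forty-quarter example can be tabulated. The numbers come straight from the examples above; only the function name is an invention of this sketch.

```python
QUARTERS = 40
VALUE = 0.25  # dollars per quarter

def payoffs(dictator_keeps, responder_accepts):
    """Return (dictator, responder) dollar payoffs for a proposed split."""
    if not responder_accepts:
        return (0.0, 0.0)  # a decline: neither player gets any quarters
    offered = QUARTERS - dictator_keeps
    return (dictator_keeps * VALUE, offered * VALUE)

print(payoffs(20, True))   # the "fair" 20-20 split: (5.0, 5.0)
print(payoffs(39, True))   # the greedy 39-1 split: (9.75, 0.25)
print(payoffs(39, False))  # the balk: (0.0, 0.0)
```

The balk costs the responder one quarter in cash, which is the “bargain price” of the purchase discussed below.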
Some thoughts on why game theory works the way it does (or doesn’t) in the lab
Classical equilibria are self-enforcing arrangements. They are what one player can unilaterally impose on the other on pain of the other player doing no better, or perhaps strictly worse, than what could be gotten by playing along. Equilibria are factual circumstances which are neither predictions nor complete accounts of reason’s role in the solution of strategic problems.
The pursuit of self-enforcing arrangements is not an obligatory feature of plain-language rationality. A player may accept the risk of declining self-enforcement for the possible benefits of attempting some other kind of strategic accord. Game theory has fulfilled its role in pointing out that there is risk, but has nothing to say about how that risk ought to be balanced against the possible rewards of jointly non-equilibrium play.
For example, in a finitely iterated prisoner’s dilemma, it is simply a fact that neither player can compel the other to adopt any pattern of play more lucrative than unrelieved defection. This is the fact which is revealed by a sound backwards induction argument.
It does not follow that the only rational course of action is always to defect. What does follow is that there is some risk inherent in passing up the potentially self-enforcing income stream which is offered by the structure of the game.
Generalist scholars of rationality can say more about this game. One might advise the players about methods for balancing risk and potential reward, and even do that much publicly. Each player would then choose how many rounds of any particular instance of the game to spend attempting cooperation. That is a private, and so unexploitable, decision.
Advice of that kind is neither contrary to game theory nor hostile to the notions of rationality upon which it is founded. It is, however, outside the scope of the theory. The advice does not take the accustomed form of a joint strategic recommendation, but rather takes the less specific form of guidance for choosing a strategy.
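One crude way to see why such guidance can pay is to model each player as privately picking a number of rounds to attempt cooperation before reverting to defection. The payoff numbers and the fixed-plan model are assumptions of this sketch, not a claim about how real players choose.

```python
def pay(my_moves, their_moves):
    """Sum my payoffs over a series (illustrative assumed numbers)."""
    table = {("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0, ("C", "C"): 3}
    return sum(table[(m, t)] for m, t in zip(my_moves, their_moves))

def plan(n, k):
    """A privately chosen plan: attempt cooperation for the first k of n rounds."""
    return "C" * k + "D" * (n - k)

n = 20
print(pay(plan(n, 15), plan(n, 15)))  # both attempt 15 rounds: 50 each
print(pay(plan(n, 0), plan(n, 0)))    # unrelieved defection: 20 each
print(pay(plan(n, 15), plan(n, 12)))  # the more trusting player still gets 41
```

Even when the two private choices differ, both players in this toy model outscore the self-enforcing stream of mutual defection, which is the sense in which the risk can be worth taking.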
In the one-shot ultimatum game, the second player who declines a miserable share makes use of a simple fact: one can dispose of wealth without taking possession of it. There is no mystery here: the second player has not rejected the offered share, but rather has made a purchase with it.
Presumably, it feels good to frustrate a greedy so-and-so. The meagerness of the offer entails a low cost for indulging in that good feeling.
It is an excellent research question why that gesture feels good. But here and now, satisfaction can be bought at a bargain price, possibly with favorable tax treatment as well. The purchase sounds rational. As for that being some crisis for game theory, the theory never did claim to offer shopping advice.
Finally, there are those episodes of cooperation contrary to formal dominance, as in one-shot prisoner’s dilemmas. In order for dominance to apply, both players must fully believe that this is their one and only interaction, ever.
Really to know that one will never again encounter some other person requires preternatural gifts. Any doubt, even scrupulous doubt, about further interaction removes the force behind a dominance argument, the supposed certainty that one cannot do better than by playing the dominant option. As a popular folk-saying reminds us, it is a long road which does not bend.
A future encounter need not be left passively to chance, nor is retaliation the only possible reason to make an effort to contact someone after the lab session is over. A truly strategically minded person might see an opportunity. A shared episode of cooperation may serve as the foundation of a more extensive relationship with a former stranger. If the risk of taking the initiative in the lab is competitive with the fees of a commercial introduction service in real life, then the reported behavior is plausibly rational.