Newcomb’s Problem

William Newcomb’s problem was reportedly first used as a conversational ice breaker at parties. It became an object of serious study largely thanks to the writing of the late Harvard philosophy professor Robert Nozick.

http://www.slate.com/id/2061419/

The situation is simple. You believe that somebody you know is a nearly faultless predictor of human behavior. Part of the fun is thinking about what it would take to convince you of that. Maybe a lengthy sterling record of successful past predictions, bolstered by a reputation of being a witch, a space alien, or an angel will suffice. But you are convinced. Then, this predictor offers you a very generous proposition.

“Here are two envelopes. One is transparent. It has a thousand dollars inside.” You see that it does. “The other is opaque. It contains either a cashier’s check for a million dollars, or no money at all, just a blank sheet of paper. Here, take them in your hands.” The predictor lets go of them. The envelopes are in your hands, and your hands alone.

“You have two choices. You may keep both envelopes, or you may keep just the opaque envelope and return the thousand dollars to me. Just drop it on the floor, I’ll pick it up later.

“But here’s the rub. I have predicted which way you are about to choose. If I thought you would keep just the one opaque envelope, then I put the check for a million dollars inside the envelope. If I thought you would keep both envelopes, including that thousand dollars for sure, then there is no money in the second envelope.”

So, which envelope(s) do you keep? Straight up, no tricks. You don’t get to flip a coin. You don’t think she’s a stage magician who can make her prediction come true by sleight of hand. What she says is how it is, as far as you’re concerned.

Do you take both envelopes, for a thousand dollars plus maybe much more, or just one envelope, for what you estimate is a far better chance of becoming a million dollars richer? 

The principal arguments

There are many arguments for either course of action, because there are many ways of looking at your happy predicament. Two arguments stand above the rest, each urging different choices.

In favor of keeping both envelopes is a dominance argument. The uncertainty about what is in the opaque envelope is irrelevant to your decision. If the million is in your hand, then you should keep both envelopes, to make an extra thousand. If there isn’t any million, then you should still keep both envelopes, to make any money at all. The conclusion, then, is that you should keep both envelopes. No matter what’s in the mystery envelope, you’ll be a thousand dollars richer than if you give the money back.
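If it helps to see the dominance comparison spelled out, here is a minimal sketch in Python. The dollar figures are the story’s; the tabulation itself is only illustrative, and notice that no probabilities appear anywhere, which is exactly the point of a dominance argument.

# Payoffs, in dollars, for each choice under each possible state of the
# opaque envelope, as described in the story.
payoff = {
    "keep both": {"million inside": 1_001_000, "empty": 1_000},
    "keep one":  {"million inside": 1_000_000, "empty": 0},
}

for state in ("million inside", "empty"):
    both, one = payoff["keep both"][state], payoff["keep one"][state]
    print(f"{state:>14}: keep both = {both:>9,}, keep one = {one:>9,}, "
          f"difference = {both - one:,}")
# In either state, keeping both envelopes pays exactly $1,000 more.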

Keeping only one envelope enjoys an expected utility argument. If you take both envelopes, then you believe that the predictor almost surely foresaw that you would, and that she placed no money in the envelope. On the other hand, if you give her back the thousand dollars, then you believe that the predictor almost surely foresaw that, and she did give you the million dollars. So, you should keep only the opaque envelope, because that way you are much more likely to make a million dollars.
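The same figures, rearranged, give the expected-utility version. Here the predictor’s accuracy matters; the 99% below is only an illustrative stand-in for “nearly faultless,” and any accuracy above roughly 50.05% produces the same ranking.

p = 0.99  # assumed probability that the predictor foresaw your actual choice

# If you keep both, she almost surely foresaw it and left the envelope empty.
ev_both = p * 1_000 + (1 - p) * 1_001_000   # = 11,000
# If you keep only the opaque envelope, she almost surely filled it.
ev_one = p * 1_000_000 + (1 - p) * 0        # = 990,000

print(f"expected value, keep both: ${ev_both:,.0f}")
print(f"expected value, keep one:  ${ev_one:,.0f}")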

The usual kind of formal decision theory uses both dominance and expected utility considerations. Nozick had described Newcomb’s problem as a conflict between “two principles of choice.” Maybe decision theory contradicts itself, and recommends both answers.

That never seriously threatened the theory. Had there turned out to be a contradiction, it would probably have been attributed to the obvious self-reference in the situation. The one-envelope analysis uses the outcome of the decision problem to derive the outcome. All sufficiently rich formal systems are prone to self-reference paradoxes (is the sentence “This sentence is false” true or false? If it’s false, then it’s true, and if it’s true, then it’s false). Decision theory is sufficiently rich. That is a feature, not a bug.

But there is no contradiction in the theory because the difference between the two analyses lies outside the theory. The conflict concerns how you model the situation. The theory advises you in each case how to proceed if the real situation conforms with the model. The theory cannot, and doesn’t claim to, tell you how well your model conforms with the real situation.

“The money has already been placed in the envelope or not, so which envelopes you pick cannot change that” is a principle of metaphysics, not of decision theory. Causes must precede effects, and the envelope is already stuffed. If there’s something wrong with that reasoning, then it has nothing to do with decision theory.

That doesn’t mean that the one-envelope argument is wrong, though. Some people do believe in preternatural or supernatural perception of various kinds.

How might the predictor’s perception work? Does your choice cause some sensory organ of hers to perceive your choice? Is the money really “already” in the envelope in that case?

Some people would take those questions seriously. It is not the job of decision theory to instruct them in how the world really works, but rather to make a recommendation consistent with their beliefs and preferences. That is the standard for correctness in decision theory.

If the client believes that the predictor is a witch, then that is the correct standard for judging the theory’s advice to the client. Of course, most real-life clients won’t think the predictor is a witch. Practical decision theorists, who typically build the models and also solve them, had early on adopted heuristics for model building to ensure that clients wouldn’t pay for information that cannot change their decision.

In the hospital, these heuristics prevent extra purely diagnostic tests, which are both costly and risky, after a course of treatment has already been irrevocably decided. “Nice to know” won’t cut it.

In Newcomb’s problem, these heuristics counsel against giving up the transparent envelope (paying $1,000) for an increased estimated probability of the money being in the other envelope (information that cannot change a decision you will already have made). I wrote about the role of these heuristics for Newcomb’s and other puzzle problems way back in 1985 in Theory and Decision (18: 129-133).
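To see the heuristic at work on the story’s own figures, here is a rough value-of-information sketch under the two-envelope modelling, in which the envelope’s contents are taken as fixed independently of your choice. The fifty-fifty prior is an arbitrary illustration; the result is the same for any prior.

prior = {"million inside": 0.5, "empty": 0.5}  # any prior gives the same answer
payoff = {
    "keep both": {"million inside": 1_001_000, "empty": 1_000},
    "keep one":  {"million inside": 1_000_000, "empty": 0},
}

# Best expected value if you must choose now, without further information.
best_now = max(sum(prior[s] * payoff[a][s] for s in prior) for a in payoff)
# Best expected value if an oracle revealed the contents before you chose.
best_informed = sum(prior[s] * max(payoff[a][s] for a in payoff) for s in prior)

print(f"value of perfect information: ${best_informed - best_now:,.0f}")
# Because one choice is best under either revelation, the value is $0:
# no improved probability estimate can change the decision, so the
# heuristics say not to pay $1,000 for one.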

And yet the problem persists and its tribe increases

Even though Newcomb’s is easily “defanged” as a threat to decision theory, it survives as an object of discussion. Nozick himself explained the fascination:

I have put this problem to a large number of people, both friends and students in class. To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.

In other words, it’s fun to argue about Newcomb’s Problem. The puzzle has a “Necker cube” quality, like the simple line drawing which seems to flip back and forth between two orientations in space.

http://www.yorku.ca/eye/necker.htm

The same person can find their intuition pulling one way one moment, and the other way the next.

Apart from fun, debates and discussions can also be good practice in building and evaluating models. The problem may be unrealistic, but the issues raised may be realistic and important enough to reward thinking about them.

For example, some two-envelope advocates can be stumped by the following. After you have decided, but before you open the opaque envelope, somebody who chose only one envelope, theirs also still unopened, offers to swap envelopes with you for a thousand dollars. Would you pay to swap?

Probably yes, since now the expected utility argument legitimately applies. The predictor is very good at what she does. You believe that your envelope is very probably empty and the other envelope is very probably full. The difference in probabilities is dramatic, and could easily overshadow the mere $1,000 asking price.
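A back-of-the-envelope version of that reasoning, using the same illustrative 99% accuracy as before: you kept both envelopes, the other person kept one, and you believe the predictor called each of you correctly.

p = 0.99       # assumed predictor accuracy, as before
price = 1_000  # the asking price for the swap

ev_my_opaque = (1 - p) * 1_000_000   # probably empty:  10,000
ev_their_envelope = p * 1_000_000    # probably full:  990,000

print(f"expected gain from swapping: ${ev_their_envelope - ev_my_opaque - price:,.0f}")
# Roughly $979,000 in expectation, which dwarfs the $1,000 asking price.
# (The transparent $1,000 you already kept is the same either way, so it drops out.)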

OK, but you could have had just what the hypothetical person offers, an envelope prepared for somebody who chose only one envelope, for the same price, a thousand dollars. All you needed to do was to hand back the transparent envelope. You decided not to.

“Consistent hypothetical exchange” and “all that matters are the pay-offs and their probabilities” are important decision theory principles. Is the two-envelope decision maker being inconsistent, refusing to pay a thousand dollars for one envelope, but gladly paying the same thousand dollars for another envelope with the same estimated probability of holding the same contents?

Not necessarily, but the resolution is left as an exercise.

The interesting one-envelope case is someone who fully agrees that the money is already inside the opaque envelope or else it will never be there, but for whom paying the thousand dollars still “feels right.”

What does she feel she is buying for her thousand dollars?

One thing she might be paying for is insurance against the possibility that the model is wrong. Is there really no way for the predictor to switch envelopes? It’s easy enough to say in the specifications of a hypothetical problem “There are no tricks…,” but in real life? How could you know such a thing, really?

There are other possibilities as well. If they are ignored in the decision analysis, then a client who values them is poorly served.

All those serious ramifications, and it’s still a great ice breaker at parties.
