Newcomb’s paradox

January 6, 2010

This post presents my analysis of Newcomb’s paradox. I wrote it back in 2001, after finding the paradox discussed in Martin Gardner’s book “Knotted Doughnuts”. Reading the discussion in the Wikipedia entry, I find my resolution more satisfying than those offered there. While I clearly take the “no free will” avenue mentioned in the entry, I think that drawing the analogy to a program-programmer situation reveals the essence of the matter and avoids unnecessary muddles such as references to “reverse causation”. It is also worth noting that determinism is not a necessary factor in the setup: even if the computer could use a random number generator that is not predictable by the programmer, the situation would not change materially.

Newcomb’s paradox:

A psychologist comes to you claiming to have invented a machine able to scan your brain and predict your future actions with certainty. She proves her machine’s ability by predicting numbers you choose and all kinds of other actions, until you are convinced that the machine really works. In many trials, you have never seen it fail.

She then puts a $10 bill on the table and gives you a sealed envelope. The envelope and its contents are yours; you are also allowed to take the bill if you want to. She says that she used the machine to predict whether you will take the bill: the envelope is empty if the machine said you will take the bill, and it contains $100 if the machine said you will not.

The problem is: should you or shouldn’t you take the bill?

Argument against taking the bill: If you take the bill, the machine would have predicted that you would, and therefore the envelope would be empty, so you end up with a total of $10. If you do not take the bill, the machine would have known that too, and the envelope would contain $100. Therefore, not taking the bill yields the higher return.

Argument for taking the bill: Say there are x dollars in the envelope. If you take the bill, you get x + 10; if you don’t, you get x. Therefore, taking the bill yields the higher return.
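For concreteness, the two arguments can be written out as payoff calculations (a small Python sketch; the function and variable names are illustrative, not part of the original puzzle):

```python
BILL = 10
ENVELOPE = 100

def payoff(takes_bill, predicted_takes_bill):
    """Total received, given the subject's action and the machine's prediction.
    The envelope is empty exactly when the machine predicted 'take'."""
    envelope = 0 if predicted_takes_bill else ENVELOPE
    return envelope + (BILL if takes_bill else 0)

# Argument against taking: the prediction always matches the action.
assert payoff(takes_bill=True, predicted_takes_bill=True) == 10
assert payoff(takes_bill=False, predicted_takes_bill=False) == 100

# Argument for taking: for any FIXED envelope contents, taking adds $10.
for prediction in (True, False):
    assert payoff(True, prediction) == payoff(False, prediction) + 10
```

The two arguments differ only in which rows of the payoff table they treat as reachable: the first assumes action and prediction always agree, the second holds the prediction fixed while varying the action.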

My analysis:

I think the confusing part of this paradox is that the subjective perception of free will is so strong that even when we are willing to assume that the machine can predict what we do, we still cling to the idea that we can make up our own minds.

I find the following trick illuminating. Repeat the whole story of the paradox, replacing you (the subject) with a computer program and the psychologist with a programmer. Now ask the question again:

“Should the program or shouldn’t it take the $10 bill?”

Now it sounds absurd. A program will either take the bill or not; there is no question of “should”. (In fact, the whole scenario sounds a bit absurd: why would the programmer bother to run such an experiment if she knows the result in advance?)

Under the assumptions made about the mind reading machine, the question does not make any more sense when applied to a person.
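The analogy can be made concrete with a minimal sketch (Python; the trivial “subject” program and the names are illustrative assumptions, not part of the original story):

```python
def subject():
    """A deterministic 'subject': whatever rule it encodes, it returns
    a definite choice. This particular one never takes the bill."""
    return False  # False = leave the bill, True = take it

def predict(program):
    """The 'machine': predicting a deterministic program amounts to
    simply running it ahead of time."""
    return program()

# The programmer fills the envelope according to the prediction...
prediction = predict(subject)
envelope = 0 if prediction else 100

# ...and when the experiment is actually run, the action matches the
# prediction by construction. Asking whether the program "should" take
# the bill has no purchase: it simply does what it does.
assert subject() == prediction
assert envelope == 100
```

Nothing here involves reverse causation: the prediction precedes the action, and the two agree because both are outputs of the same fixed program.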


3 Responses to “Newcomb’s paradox”

  1. Omri Says:

    Hi brother,

    The whole “paradox” is silly, since it can easily be shown that such a machine cannot be built. Namely, suppose the person asks the machine which of two actions she will take, and decides in advance that whatever the machine says, she will do the opposite. The machine might be able to ‘predict’ the decision of the person, but not which action she will actually take. QED


  2. Yoram Gat Says:

    Hi brother,

    In the scenario, the machine is able to predict how the person behaves under certain conditions – naturally, under those conditions the person has no access to the output of the machine. Again, the analogy to the programmer-program scenario is illuminating: a programmer is able to predict the output of a program in a given situation. Of course, this would be impossible if one of the inputs to the program were the prediction of the programmer itself.

  3. […] [Some interesting scenarios would be created, such as the one described by Newcomb's paradox.] […]
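The objection in the first comment, and the restriction stated in the reply, can be sketched in a few lines (Python; the names are illustrative): a subject that is handed the prediction as an input can always invert it, so the machine can only be assumed to predict actions taken without access to its output.

```python
def contrarian(announced_prediction):
    """A subject that is told the machine's prediction and then
    deliberately does the opposite."""
    return not announced_prediction

# Whatever the machine announces, the announced prediction is wrong.
for announced in (True, False):
    assert contrarian(announced) != announced
```

This is the same diagonal structure as the reply describes: prediction-by-simulation works for a closed program, but fails as soon as the prediction itself becomes one of the program’s inputs.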
