RS140 - Kenny Easwaran on, "Newcomb's Paradox and the tragedy of rationality"
Release date: August 9, 2015
This episode of Rationally Speaking features philosopher Kenny Easwaran, who delves into the notorious "Newcomb's Paradox" -- the puzzle about which it was once said, "To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." Kenny and Julia explore how Newcomb's Paradox is related to other puzzles in decision theory, like the Prisoner's Dilemma; what its implications are for free will; and what Kenny calls the "deep tragedy" at the heart of rationality.
Kenny Easwaran is an Associate Professor in the Philosophy Department at Texas A&M University. He works on several topics relating to epistemology and decision theory, and the role of probability in helping to understand these and related concepts.
Kenny's pick: "Thinking, Fast and Slow"
Full Transcripts
Reader Comments (17)
The problem with the box of money may seem a little subtler, but that's only because the scenario is less plausible and even less well defined in terms of situational ethics. If I were to walk into such a situation and have the question posed to me in those terms, the first thing I would have to ask is: "What are you not telling me?" Most likely I would opt to take neither box, on the grounds that I don't know where this money is coming from or who could be better served by it than me.
What are the consequences of being one type of person or another in rationally dealing with life? Knowing that there are other people in the universe who could possibly suffer the consequences of my failure to do what I promise, whether I ever know about it or not, causes me not to always do what is in my own best interest. That is the nature of morality and ethics. My very rational answer to any of these problems is to always do the ethical thing, because you simply don't know.
https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy
Both scenarios force you to make a decision BEFORE the predictor does anything. But that's what determinism says, that everything is already decided from the outset. Yet the illusion of free will makes you think that you're deciding AFTER the money has been placed.
On the other hand, if the predictor is just your friend who knows you, then he's guessing what you'll do based on your past behavior. If he thinks you'll pick two boxes, then you're out of luck. If you decide to surprise him by picking one box, you'll just be left with no money, so you may as well take both boxes. Your best strategy would be to bluff that you'll pick one box, but then take both boxes.
Assuming the oracle is limited to reading intentions only, and that its goal is to maximize its own accuracy, how does one maximize the take?
Strategy 1. Enter the room and intend to flip a coin: if heads, take both boxes; if tails, take just the one. The best the oracle can do is flip a coin of its own to decide whether to put the million in the opaque box. The expected payout is $500,500: the $500 comes from the half of the time the player picks both boxes, and the $500,000 from the half of the time the oracle puts in the million.
Strategy 2. Enter the room and intend to roll a die: if a 6 comes up, take both boxes; otherwise take just the one. The oracle will always put in the million, because that maximizes its accuracy. The expected payout is $1,000,167, with the $167 coming from the 1/6 of the time a six shows up.
Strategy 3. Enter the room with a slightly biased coin that comes up heads 499 times out of 1,000: if heads, take both boxes; if tails, take just the one. Any small bias toward taking only one box will cause the oracle to always put in the million, so the expected payout is $1,000,499.
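(A quick sanity check of those numbers in Python, under the same assumptions: the oracle sees only your mixing probability and best-responds to maximize its own accuracy, with the usual $1,000 clear box and $1,000,000 opaque box. The function name and framing are mine, not from the episode.)

    # Expected payout when you take both boxes with probability p_both,
    # against an oracle that best-responds to maximize its own accuracy.
    CLEAR, MILLION = 1_000, 1_000_000

    def expected_payout(p_both):
        if p_both > 0.5:
            q_fill = 0.0   # oracle predicts two-boxing: opaque box left empty
        elif p_both < 0.5:
            q_fill = 1.0   # oracle predicts one-boxing: million goes in
        else:
            q_fill = 0.5   # oracle is indifferent: may as well flip its own coin
        # You always take the opaque box; you add the clear box when two-boxing.
        return q_fill * MILLION + p_both * CLEAR

    print(expected_payout(0.5))    # Strategy 1 (fair coin):   500500.0
    print(expected_payout(1 / 6))  # Strategy 2 (die):         ~1000166.67
    print(expected_payout(0.499))  # Strategy 3 (biased coin): 1000499.0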
This actually comes up in gaming a lot. In tournament poker, a lot of decisions will change based on how many chips you already have. If you call and are wrong, you'll be crippled or out, but if you fold you'll maintain a workable stack. So you might fold in a situation that might have a slightly positive EV in chips just to preserve the utility of the stack. The reverse of that would be having a huge stack where you don't mind calling anything, because even if you lose you'll still have a lead.
Regardless, the person offering the bet in Newcomb's Paradox is an idiot, as she's always out at least a thousand bucks. I'd play poker with her any day.
The Evidential Solution: Take the opaque box.
The Zen Solution: Take neither box.
The Geek Solution: You're already wearing your own brain scanner. Check the logs and do the opposite of T($enter_tent).
The Kobayashi Maru Solution: Sneak in the day before and reprogram the brain scanner.
The Infomercial Solution: Sell 100k copies of the book "Beat the Brain Scanner and get $1,001,000" for $10.01 each.
The Philosopher Solution: Prove that such a device would solve the Halting Problem, or violate Gödel's Incompleteness Theorem. Or argue about Quantum Physics, Heisenberg's Uncertainty Principle, or Special Relativity.
The Wall Street Solution: You have insider information from the company that manufactured the brain scanner, an HFT algorithm that gets the result 1ms faster, put/call options in case you make the wrong choice, and a $1m bailout from Congress after you invest the money in Greek bonds.
The Brute Force Solution: There's $1m in the tent somewhere. Beat the experimenter up until <non_gender_specific_pronoun> gives it to you. (Note - this is a Computer Science-related pun. A joke. I do not condone violence or the threat of violence. Seriously, the internet really needs to lighten up.)
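(The Geek and Philosopher Solutions are really the same diagonal argument. Here's a toy Python sketch of it, with all names invented for illustration: any predictor whose output you can consult before choosing is wrong by construction.)

    # Toy diagonalization: read the scanner's log, then do the opposite.
    def chooser(prediction):
        """Defy whatever the scanner logged."""
        return "two-box" if prediction == "one-box" else "one-box"

    for predicted in ("one-box", "two-box"):
        actual = chooser(predicted)
        print(predicted, "->", actual, "| prediction correct:", predicted == actual)
    # Both lines print False: this is the same self-reference move that
    # drives the proof that the Halting Problem is undecidable.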
Inasmuch as the situation is defined such that the person's decision to take one box or two has absolutely no effect on what money is in the boxes, anyone who acts/believes differently, as if their decision controls whether there is more or less money in the box, is engaging in magical thinking. They are deluding themselves into feeling like they have some power which is clearly not there. Why would a person fantasize that they have powers they clearly don't? This happens all the time in religion. People imagine that they can ask a deity for favors and have them granted. (They are raised to believe this.) It is not surprising that so many people fall into this fantasy power wish. The resulting "controversy" isn't because of any uncertainty about the rational choice in the situation. The controversy is just that too many people don't distinguish between magical thinking, wishful thinking, and realistic thinking.
I would be interested in seeing psychological studies on whether those who are willing to think that their choice in the situation can possibly affect the presence of money in the box are also more likely to engage in other forms of magical or wishful thinking. What kinds of people would engage in magical thinking? Education level, age, sex, nationality, social and economic levels, etc. I myself am strongly physics-oriented and would expect that those in the sciences would be less likely to fall into magical thinking. The situation in the paradox is a good type of situation to use as a test question to look for magical-thinking patterns. I imagine that something like this has been done and would like to find out more about it.
If anyone knows of such studies/tests I'd love to hear about them.
On Newcomb's paradox, I agree with the view that backwards causation isn't happening. It's based on what kind of person you are at the moment the paradox is put to you. That may be a problem for free will, but I'm skeptical about free will, so it's not so much of a problem for me. I also like the notion that it's a contrived test conceived by a devious intelligence and as such doesn't necessarily speak against rationality in the natural wild. Lots to muse on, so an excellent episode.
Thanks to all the people involved for all their efforts over the years. Great comments too.
In particular, the notion of "the predictor" artificially introduces concerns about free will, retro-causality, determinism and such. People start calculating expectation values, conditional probabilities and so on. The situation of the subject gets conflated with these considerations, which is confusing. Assuming a perfect predictor, for specificity, the situation of the subject is exactly the same as given below.
The situation that the person finds themselves in is this. They are told that the money is either already in the boxes or not, and that their decision to take one or both boxes will not affect the money in either box. They are also told that if they take both boxes there will only be money in the clear box, and that if they take only the opaque box there will be a million dollars in it.
Clearly, saying that their decision won't affect the money already in the boxes is flat-out inconsistent with telling them that the amount of money in the boxes depends on their choice. The introduction of the predictor just distracts the listener from the inconsistency in the situation being set up. It is said that the perfect-predictor scenario is the easiest to decide, that the only reasonable choice is to take only the opaque box, since then one surely gets the million dollars. But that is exactly the situation given above. If one accepts the description as given, one has accepted a contradiction, so either side can argue forever; anything follows from a contradiction.
That's not how I read it. The way I learned it, the predictor will put the money in the box based on what will happen in the future, and afterwards you can take one or both boxes. To me, it was always about determinism and free will.
The way it was described in the episode, a brain scanner reads your intention, so it was more about whether you can intend to do one thing, but then do another, sort of like trying to fool a very good lie detector.
But the analogy to the studies that find correlation without causation would be if the predictor noticed that, say, 60% of women take both boxes while 60% of men take one box, and predicts solely based on your gender. In that case, you'd be better off taking two boxes.
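(A quick dominance check makes that concrete. In the sketch below, q is the probability the million is in the opaque box; it's a made-up parameter standing in for a prediction based only on gender, not on your actual choice.)

    # If the opaque box's contents don't depend on your choice,
    # two-boxing beats one-boxing by exactly $1,000 at every q.
    CLEAR, MILLION = 1_000, 1_000_000

    def ev(q, take_both):
        return q * MILLION + (CLEAR if take_both else 0)

    for q in (0.4, 0.6):  # e.g. the 60/40 splits from the comment above
        print("q =", q, "| one-box:", ev(q, False), "| two-box:", ev(q, True))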
There's no logic or reason involved, because you've already been presented with two absolute facts: if you take just the opaque box, there's a million dollars in it; if you take both, it's empty. The rest of the story is irrelevant. It's horseshit. There is only one fact that has any meaning at all: if you take just the opaque box, there's a million dollars in it. The only reason you wouldn't do that is if you thought the rules were lies, but that's not presented as a possibility.
Going with real-world causal-theory thinking here (that is, going by the current state of affairs plus my action) is irrelevant, because this is a world of causality loops, in which that logic does not apply. In this world one cannot ignore what preceded the current state of affairs...
There is no tension between the theories.
Just one rational act, given the type of world and its kind of causality.
For the Newcomb Problem, consider two events: t1, the brain scan, and t2, the box choice. To make an already extremely difficult problem simpler, assume 100% accuracy for the brain scan. At t2, you must make a choice knowing that t1 previously occurred with 100% accuracy. The event at t1 could make you involuntarily commit to an action at t2.

If at t2 you feel you simply cannot forgo the $1K, even though it's only 0.1% of the $1M, then you should take both boxes and settle for the $1K. If, however, you feel you can risk the $1K in order to have a chance at the $1M, then you should take only the opaque box. The brain scanner need not have any uncertainty, since your own mind already provides uncertainty.

You have to decide whether the mad scientist can make you involuntarily commit to abstaining from taking the $1K. Thus you have a rational character and engage in a rational action whatever choice you make, so long as you use rationality to predict your previous mental state. A rational person cannot, however, decide that they had involuntarily committed to the opaque box earlier, then somehow uncommit themselves and take both the opaque box and the $1K. So you might have the opportunity to gain $1K or $1M, but you cannot expect to gain both the $1M and the $1K ($1,001,000).

Another interesting question arises if we consider a 50% accuracy of the brain scan.
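(On that closing question, a small Python sketch, under my assumption that the scanner is right with the same probability a whichever choice you make:)

    # Expected value of each choice as a function of scanner accuracy a.
    CLEAR, MILLION = 1_000, 1_000_000

    def ev_one_box(a):
        return a * MILLION                # million is in the box iff the scan was right

    def ev_two_box(a):
        return (1 - a) * MILLION + CLEAR  # million is in the box iff the scan was wrong

    for a in (0.5, 0.5005, 1.0):
        print("a =", a, "| one-box:", ev_one_box(a), "| two-box:", ev_two_box(a))
    # At a = 0.5 two-boxing wins by $1,000; the lines cross at
    # a = (MILLION + CLEAR) / (2 * MILLION) = 0.5005; at 100% accuracy
    # one-boxing wins by $999,000.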