RS145 - Phil Tetlock on "Superforecasting: The Art and Science of Prediction"
Release date: October 18, 2015
Philip E. Tetlock

Most people are terrible at predicting the future. But a small subset of people are significantly less terrible: the Superforecasters. On this episode of Rationally Speaking, Julia talks with professor Phil Tetlock, whose team of volunteer forecasters has racked up landslide wins in forecasting tournaments sponsored by the US government. He and Julia explore what his teams were doing right and what we can learn from them, the problem of meta-uncertainty, and how much we should expect prediction skill in one domain (like politics or economics) to carry over to other domains in real life.
Phil's pick: "Perception and Misperception in International Politics"
Full Transcripts

Reader Comments (21)
One thing they learned was that allowing forecasters to see the group's forecast and rationales before making their own predictions greatly improved the group's accuracy, compared to forcing forecasters to predict independently in order to prevent groupthink. I was in the group that was blindfolded before making predictions, but this result was so clear that the blindfold was lifted.
- I can't believe neither the host nor the guest made reference to Knight's distinction between risk and uncertainty, which he made in a famous book published in 1921 - almost 100 years ago! Risk is when the probabilities are known, such as the chances of a certain outcome in a sequence of coin flips. Uncertainty is when either the probability is not known or there is no probability distribution, such as the chance of the United States becoming a dictatorship.
- The host repeatedly mixed up the external probability of an outcome occurring with "degree of certainty", which describes an internal mental state. She made a good point that a degree of certainty always exists, but that doesn't mean the external probability also exists. The guest did not use the phrase "degree of certainty", so I am left with the impression that he was dealing with a different type of probability than the one the host was asking about.
- She didn't ask what I consider to be the hard question. Suppose two competing forecasters are asked about a one-time event, such as whether a certain person will be elected president in the next election. The first forecaster says the chances are 60-40. The second says the chances are 40-60. Once the election happens, how can you decide which forecaster was more accurate? Because the trial can't be repeated, you can't make that judgment. (See the scoring-rule sketch below.)
Thanks a lot
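
One standard response to the comparison problem raised in the last comment is a proper scoring rule such as the Brier score, the kind of accuracy metric used in the forecasting tournaments discussed in the episode. Below is a minimal Python sketch; the function name and election numbers are illustrative assumptions, not from the episode.

```python
# Illustrative sketch (not from the episode): the Brier score, a proper scoring
# rule of the kind used in Tetlock's forecasting tournaments. The numbers reuse
# the 60-40 vs. 40-60 election example from the comment above.

def brier_score(forecast_prob: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome (lower is better)."""
    return (forecast_prob - outcome) ** 2

# Suppose the candidate wins (outcome = 1):
score_a = brier_score(0.60, 1)  # (0.6 - 1)^2 = 0.16
score_b = brier_score(0.40, 1)  # (0.4 - 1)^2 = 0.36
print(score_a, score_b)

# A single resolved event does yield a score for each forecaster, but it mostly
# reflects luck; only averaged over many questions do consistently lower Brier
# scores separate skill from chance, which is why the tournaments score hundreds
# of questions rather than one-off events.
```

On a single question the 60-40 forecaster scores better here only because the candidate happened to win; had the candidate lost, the scores would flip, which is exactly the commenter's point about one-time events.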