Related Readings
  • Answers for Aristotle: How Science and Philosophy Can Lead Us to a More Meaningful Life, by Massimo Pigliucci
  • Nonsense on Stilts: How to Tell Science from Bunk, by Massimo Pigliucci
  • Denying Evolution: Creationism, Scientism, and the Nature of Science, by Massimo Pigliucci
Saturday, October 17, 2015

RS145 - Phil Tetlock on "Superforecasting: The Art and Science of Prediction"

Release date: October 18, 2015

Most people are terrible at predicting the future. But a small subset of people are significantly less terrible: the Superforecasters. On this episode of Rationally Speaking, Julia talks with Professor Phil Tetlock, whose team of volunteer forecasters has racked up landslide wins in forecasting tournaments sponsored by the US government. He and Julia explore what his teams were doing right, what we can learn from them, the problem of meta-uncertainty, and how much we should expect prediction skill in one domain (like politics or economics) to carry over to other domains in real life.

Phil's pick: "Perception and Misperception in International Politics"

Full Transcripts 


Reader Comments

I took part in one of the ACE forecasting teams. I didn't realize there were other teams at first, and they didn't talk about superforecasters. I was in the top 1% at one point; I don't know if that would qualify. But the initial scoring system was stupid: it didn't reward making long-term forecasts, so I'd just make predictions about events that were about to close. Later they changed the scoring to reward the forecasters who made the biggest impact, which rewarded making a lot of forecasts and took up too much time. They also screwed me several times; for example, I correctly predicted a 6.0+ earthquake in Japan, but they didn't count it.
One thing they learned was that letting forecasters see the group's forecast and rationales before making their own predictions greatly improved the group's forecast, compared to forcing forecasters to predict independently in order to prevent groupthink. I was in the group that was blindfolded before making predictions, but the result was so clear that the blindfold was lifted.
October 20, 2015 | Unregistered Commenter Max
Where are the promised links?
November 11, 2015 | Unregistered Commenter Zeltal
Great topic, but a few comments:

- I can't believe neither the host nor the guest referred to Frank Knight's distinction between risk and uncertainty, which he drew in his famous 1921 book Risk, Uncertainty and Profit, almost 100 years ago! Risk is when the probabilities are known, such as the chances of a certain outcome in a sequence of coin flips. Uncertainty is when either the probability is not known or there is no probability distribution, such as the chance of the United States becoming a dictatorship.

- The host repeatedly mixed up external probability (of an outcome occurring) with "degree of certainty," which describes an internal mental state. She made a good point that a degree of certainty always exists, but that doesn't mean the external probability also exists. The guest did not use the phrase "degree of certainty," so I am left with the impression that he is dealing with a different type of probability than the one the host was asking about.

- She didn't ask what I consider to be the hard question. Suppose two competing forecasters are asked about a one-time event, such as whether a certain person will be elected president in the next election. The first forecaster says the chances are 60-40. The second says the chances are 40-60. Once the election happens, how can you decide which forecaster was more accurate? Because the trial can't be repeated, you can't make that judgment.
November 24, 2015 | Unregistered Commenter Paul Fahn
For deterministic events such as coin flips, there is no "external probability." If you could measure all the weights and sizes and forces and distances involved, you could predict the outcome of the coin flip with 100% accuracy. The 50/50 probability is a measurement of your own uncertainty about the outcome. It happens to be well calibrated because we have good statistics on coin flips. (Technically, 50/50 guesses are always well calibrated, but useless.) Same with weather forecasts: if I know that it rains 10% of the time, then every day I can say there's a 10% chance of rain, and I'll be well calibrated. And if I have no idea, then on even days I can say there's a 50% chance of rain, on odd days I can say there's a 50% chance of no rain, and I'll still be right half the time, so I'll still be well calibrated.
November 24, 2015 | Unregistered Commenter Max
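Max's point, that a constant base-rate forecast can be perfectly calibrated while saying nothing about any particular day, is easy to check numerically. A minimal sketch in Python (the rain record and the forecast are invented for illustration):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """For each stated probability, return the observed frequency of the
    event among the days that received that forecast."""
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[p].append(o)
    return {p: sum(obs) / len(obs) for p, obs in buckets.items()}

# Hypothetical record: it rained on 100 of 1000 days (a 10% base rate).
rained = [1] * 100 + [0] * 900

# Saying "10% chance of rain" every single day is perfectly calibrated:
# the stated 0.10 exactly matches the observed frequency of 0.10 ...
constant_forecast = [0.10] * 1000
print(calibration_table(constant_forecast, rained))  # {0.1: 0.1}
# ... yet the forecast never distinguishes a wet day from a dry one.
```

This is why forecasting tournaments score more than calibration: a useful forecaster must also show resolution, assigning high probabilities on the days it actually rains. The Brier score sketch after Jair's comment below rewards both at once.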
Really great episode, thanks!
December 9, 2015 | Unregistered Commenter Timo
awesome episode
January 10, 2016 | Unregistered Commenter latika
Great article about the combo of science and art.
February 4, 2016 | Unregistered Commenter romy
Paul: I don't know what scheme was used, but if each person being tested makes lots of predictions, you can take the probabilities into account by scoring them with a weighted sum over the events. For example, if they say Trump is elected with 1% probability, and he is elected, they are penalized more than if they had said 30%. This scoring is not very meaningful for a single event, but it is if they are predicting many different events.
February 5, 2016 | Unregistered Commenter Jair
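Jair's weighted-sum idea is essentially a proper scoring rule, and the Brier score (the rule Tetlock's tournaments used) is the standard example: each forecast is penalized by the squared distance between the stated probability and the 0/1 outcome. A single 60-40 call can't be judged that way, but an average over many events can. A minimal sketch, with the outcomes and probabilities invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; a constant 50/50 forecaster scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records over ten one-shot events (1 = it happened).
outcomes     = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
forecaster_a = [0.6, 0.4, 0.7, 0.6, 0.3, 0.8, 0.2, 0.6, 0.7, 0.4]  # leans right
forecaster_b = [0.4, 0.6, 0.3, 0.4, 0.7, 0.2, 0.8, 0.4, 0.3, 0.6]  # leans wrong

print(brier_score(forecaster_a, outcomes))  # ≈ 0.12
print(brier_score(forecaster_b, outcomes))  # ≈ 0.46
```

Because the penalty is quadratic, saying 1% when the event happens costs (0.01 - 1)^2 ≈ 0.98, while saying 30% costs (0.3 - 1)^2 = 0.49, which is exactly the asymmetry Jair describes.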
The host repeatedly mixed up external probability (of an outcome occurring) with "degree of certainty", which is a way to describe an internal mental state.
February 9, 2016 | Unregistered Commenter love msg
To love msg: The host (Julia Galef) rejects the idea of "external probability". Reality is a single self-consistent thingamabob, but we are ignorant about what reality is, so we have to use several models, and that is why we have to use probabilities. If we had a model of reality that was identical with reality, there would be no probabilities.
February 9, 2016 | Unregistered Commenter Timo
Nice article write-up!
February 20, 2016 | Unregistered Commenter lakha
Thanks for this wonderful piece of information.
March 23, 2016 | Unregistered Commenter Lewa Jr.
I came back to this after listening to RS 156 with David McRaney. Of course, what Tetlock has to say is very relevant to any project that seeks to improve decision-making skills within a specific domain, on a specific problem, or in one's overall effectiveness. I'd like to hear more about the Good Judgment Project and the general techniques it suggests, ideally explained through an interview conducted by Julia. There is material online, but for me it is 'high context,' and I need an explanation of the background before it will be useful. A Bayesian orientation to decision-making, problem-solving, and opinion-formation may seem like second nature to her, but it is very different from the way many of us pull opinions out of strange places and then defend them as soon as they are challenged.
April 6, 2016 | Unregistered Commenter Brian Burke
Well, to be honest, deterministic events like coin flips can be tampered with by rigging the coin; still, probability models have come a long way in predicting things, e.g. in tournaments.
June 17, 2016 | Unregistered Commenter Best Epilator
I have listened to a ton of interviews with Tetlock, and this one is by far the best.

Thanks a lot
December 3, 2016 | Unregistered Commenter PabloPS
