RS 226 - Rob Wiblin on "An updated view of the best ways to help humanity"
Release date: February 4th, 2019
If you want to do as much good as possible with your career, what problems should you work on, and what jobs should you consider? This episode features Rob Wiblin, director of research at the effective altruist organization 80,000 Hours and host of the 80,000 Hours podcast.
Julia and Rob discuss how the career advice 80,000 Hours gives has changed over the years, and the biggest misconceptions about their views. Their conversation covers topics like:
- Should everyone try to get a job in finance and donate their income?
- The case for working to reduce global catastrophic risks
- Why reducing risk is a better way to help the future than increasing economic growth
- What percentage of the world should ideally follow 80,000 Hours' advice?
Links
Rob's Personal Page
Rob's Podcast: "80,000 Hours"
- Episode #45 – Prof. Tyler Cowen's stubborn attachments to maximising economic growth, making civilization more stable and respecting human rights
- Episode #10 – Dr. Nick Beckstead on how to spend billions of dollars preventing human extinction
- Episode #29 – Dr. Anders Sandberg on three new resolutions for the Fermi Paradox and how to easily colonise the universe
- Episode #6 – Dr. Toby Ord on why the long-term future matters more than anything else and what to do about it
- Episode #15 – Prof. Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise
"Making Sense of Long-Term Indirect Effects" by Rob Wiblin
"Broad versus narrow approaches to shaping the long-term future" by Nick Beckstead
Calculator for whether it’s better to speed up or slow down growth: "Differential technological development: Some early thinking"
"On the Overwhelming Importance of Shaping the Far Future" by Nicholas Beckstead
"Against the Grain: A Deep History of the Earliest States" by James C. Scott
"Destined for War: Can America and China Escape Thucydides’s Trap?" by Graham Allison
"Science Is Getting Less Bang for Its Buck" by Patrick Collison and Michael Nielsen
"Why despite global progress, humanity is probably facing its most dangerous time ever" by Benjamin Todd
"Presenting the long-term value thesis" by Benjamin Todd
"The Aestheticising Vice," Paul Seabright's review of Seeing Like a State, by James Scott
Edited by Brent Silk
Music by Miracles of Modern Science
Full Transcripts
Reader Comments (12)
Given the general stupidity of so many humans, it does not come as a surprise that the probability that humanity will exterminate itself outweighs the probability of extermination from all natural threats combined.
Instead of focusing on Global Development and Health, or on Earning to Give, would simply focusing on developing new technology that radically increases world GDP have a much greater impact on human wellbeing? Would increasing world GDP greatly increase resources available to the world's poor, and also greatly increase the demand for labor, thus providing the world's poor with jobs?
China exports about $522.9 billion (2017) of goods and services to the US annually. China has a substantial disincentive to make war on its largest customer. Would increasing world trade in general further decrease the possibility of a major world conflict?
Great podcast. Very interesting as usual.
- Julia mentioned this incentive to try stuff and see what happens, which I think is the sort of thing Taleb would advise. It isn't clear why Rob thinks targeted interventions are more or less likely to have bad consequences than the things he thinks he knows are dangerous. Clear and immediate threats like stopping someone from launching the nuke aside, how can he (or anyone) know that defusing tensions with China doesn't lead Russia to be isolated and end the world? Or that the increase in protective technologies doesn't lead to increased risk-taking (like protective equipment in football leading to harder hits)? I would guess that Taleb would note that nothing I imagine here is likely to be the thing that we should have worried about.
Hi Mark, I got myself in a bit of a tangle explaining this, and we were running out of time so I couldn't go back and clarify. Maybe I can make more sense here.
Assuming exponential growth continues for as long as it possibly can - that is, no catastrophe - we'll eventually expand to control the entire accessible universe. 2% annual growth for just 200,000 years compounds to a factor of about 10^1720, which looks like more value output than the matter in the accessible universe could support, even at technological maturity.
If we are going to plateau and stay there for billions of years while we gradually run down the universe's endowment of matter and energy - or wait around and then use the whole universe abruptly at some ideal later time - then you can see that it's close to irrelevant how fast we get there. The total value harvested will be almost identical whether we reach that plateau in 200,000 years, or 2,000,000 years.
Except of course for the reason I pointed out, that cosmological expansion means the accessible universe is shrinking by a billionth each year, so growth delayed is mass denied.
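A minimal sketch of the arithmetic behind those two figures, for anyone who wants to reproduce them; it simply takes the 2% growth rate, the 200,000-year horizon and the one-billionth-per-year shrinkage rate quoted above as given:

    import math

    # Growth factor: 2% annual growth compounded for 200,000 years,
    # computed in log10 because 1.02**200000 overflows a float.
    years = 200_000
    log10_factor = years * math.log10(1.02)
    print(f"1.02^{years} is about 10^{log10_factor:.0f}")  # ~10^1720

    # Cost of delay: if cosmological expansion removes roughly one billionth
    # of the accessible universe per year, a delay of D years forgoes this
    # fraction of the matter we could otherwise have reached.
    annual_loss = 1e-9
    delay = 1_800_000  # reaching the plateau in 2,000,000 rather than 200,000 years
    lost = 1 - (1 - annual_loss) ** delay
    print(f"Fraction of reachable matter forgone: {lost:.2%}")  # ~0.18%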
I also failed to mention that the universe's negentropy is being gradually depleted by, for example, the burning of stars. This reduces our remaining ability to perform computations, though the process is occurring surprisingly slowly and is probably outweighed by other factors.
At this point we have to call in the physicists and cosmologists as we're beyond what I as an economist can usefully discuss.
Of course as I say on the show, all the non-total-utilitarian reasons to go more quickly, such as a selfish preference for a better life now, or concern for the wellbeing of our friends and immediate descendants, remain as strong as before.
But I wonder whether both those assumptions are justified. Given a 1/1,000 annual chance of extinction events, we have a 50/50 shot of making it about 700 years. To survive a billion years with 50/50 odds would require roughly a 1/1,000,000,000 annual chance of extinction. It isn't clear 1) that such a low risk is achievable, or 2) that we could even know. So relying on unlikely future existence seems an odd reason to justify not helping more now.
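A minimal sketch of that survival arithmetic, assuming a constant annual extinction risk (the 1-in-1,000 figure is the one from the comment above):

    import math

    def years_to_even_odds(annual_risk):
        """Years until the cumulative chance of surviving falls to 50%."""
        return math.log(0.5) / math.log(1 - annual_risk)

    # A 1-in-1,000 annual extinction risk gives even odds of lasting ~700 years.
    print(f"{years_to_even_odds(1e-3):.0f} years")  # ~693

    # Annual risk compatible with a 50% chance of surviving a billion years.
    target = 1e9
    required = 1 - 0.5 ** (1 / target)
    print(f"required annual risk: about {required:.1e}")  # ~7e-10, i.e. ~1 in 1.4 billion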
It seems almost like having to reconcile quantum physics and gravity, in that the scales being considered now and a billion years from now are just completely different. I don't know that I agree the long view you've taken is one that makes the most sense, but I do appreciate the clarification and very much enjoyed listening to the interview.
I expect that at some point, if humanity persists for e.g. 1,000 years, we will find ways to reduce the risk of extinction to very close to zero, for example by spreading widely across the galaxy so that no single disaster can wipe us out. This decline in the annual risk increases our civilisation's expected lifespan, and so reduces the relative value of focussing on the present.
As for why I give weight to a total utilitarian view, that would take us into a long philosophical discussion. There's probably about 10 hours of discussion of that general topic on my show: https://80000hours.org/podcast/episodes/
We think 80,000 Hours can add more to the conversation by focussing on outcomes, especially welfare outcomes, because: i) there is more good material already written about careers from a virtue ethics or deontological position; ii) the stakes for consequentialists are very large; and iii) the way to create the best consequences with one's work is very non-obvious.
That said, I agree that given the preliminary state of ethical research we should spread our bets and give weight to various different moral considerations. Figuring out how to do that is a research focus for our trustee Will MacAskill.
https://foreignpolicy.com/2014/01/21/air-force-swears-our-nuke-launch-code-was-never-00000000/
http://tomnichols.net/blog/2014/01/08/were-americas-nuclear-codes-set-to-zero-looks-like-it-and-worse/
The original source appears to be Bruce Blair, who served in the U.S. Air Force as a Minuteman ICBM launch control officer at the time:
https://en.wikipedia.org/wiki/Bruce_G._Blair
While Dr Blair is a credible source, I would now say it's only 70% likely to be true.
"it would be better to say that 80000 hours reviews the literature in order to achieve a consensus view of best practice"
That would have described quite a bit of our work back in 2013/14 when we were getting up to speed on what was known. It's not a more accurate description of what we do today than the catchall term 'research'.
Most of the things we now care about have very little published literature about them, so we learn more by speaking to experts, and then critiquing and compiling their views. Consider this article for example: https://80000hours.org/articles/us-ai-policy/
One thing I'll certainly concede is that some of the work we do (me in particular) is just communicating what is already known in a clearer way than has been done before. Lately I'm glad to say we're starting to run out of things we know that just need to be explained, and are venturing into new territory more often.
Thanks for a great discussion, as always.
It hasn't gotten as much attention from my community so far because so many other people are already aware of it and working to solve the problem.
As a small group we expect we can have more impact as 'venture capitalists', trying to sound the alarm about - and find ways to fix - new risks that haven't yet been broadly recognised.