An unusual way to figure out if humanity is toast

A group of experts and “superforecasters” try to estimate the probability humans will go extinct by 2100.

Photo: A man holds a sign reading “SOS,” with an Earth as the O. This guy seems worried about humanity’s future. (Getty Images)
By Dylan Matthews, senior correspondent and head writer for Vox’s Future Perfect section.

Predicting the future perfectly is impossible. Predicting it with more accuracy than a magic eight ball is extremely hard. But in recent years, a Penn psychologist has been arguing that this kind of prediction is possible — and that some specific people are especially good at forecasting events to come.

Philip Tetlock’s studies of “forecasting” have led him to conclude that forecasting talent is very unevenly distributed. Most people are not great at predicting future events, but the top fraction of forecasters can outperform even subject matter experts in some circumstances. He calls these people “superforecasters,” and he and his colleagues at the Forecasting Research Institute are trying to use their skills to help give concrete guidance about crucial, hard-to-predict topics.

Case in point: Tetlock, economist Ezra Karger, Forecasting Research Institute CEO Josh Rosenberg, and seven co-authors just released the results of their Existential Risk Persuasion Tournament, which was meant to “produce high-quality forecasts of the risks facing humanity over the next century.” To do this, they surveyed subject matter experts who study threats that could at least conceivably jeopardize humanity’s survival (like nuclear weapons, pandemics, climate change, and rogue artificial intelligence), as well as superforecasters who’ve proven accurate at predicting events in the past. The superforecaster group is not made up of experts on existential threats to humanity, but rather generalists from a variety of occupations with solid predictive track records.

The median expert put the odds that humans will go extinct by 2100 at 6 percent, and put 20 percent odds on a catastrophic event before 2100 that kills off at least 10 percent of the human population within a five-year period. (To put into perspective just how catastrophic such an event would be, World War II resulted in the deaths of less than 4 percent of the global population at the time.) The superforecasters, by contrast, were more optimistic, putting the chance of catastrophe at 9 percent and the chance of extinction at 1 percent.

These are astonishingly large risks. The expert survey suggests that humanity has worse odds of surviving to 2100 than a man diagnosed with prostate cancer has of living another five years; the superforecasters estimate that humans are likelier to go extinct than an average person is to be audited by the IRS.
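To make those comparisons concrete, here’s a back-of-the-envelope sketch in Python. The tournament figures are the medians reported above; the reference points (prostate cancer survival, IRS audit rates, World War II deaths) are rough, widely cited ballpark figures I’m supplying for illustration, not numbers from the study.

```python
# Back-of-the-envelope comparisons. Tournament figures are the medians reported
# above; the reference base rates are approximate figures assumed for illustration.

expert_extinction = 0.06      # median expert: human extinction by 2100
super_extinction = 0.01       # median superforecaster: human extinction by 2100
expert_catastrophe = 0.20     # median expert: >=10% of humanity dies in a 5-year span
super_catastrophe = 0.09      # median superforecaster, same question

# Assumed reference points (rough, not from the study):
prostate_5yr_survival = 0.97  # ~97% five-year survival after a prostate cancer diagnosis
irs_audit_rate = 0.004        # well under 1% of individual returns audited in a year
ww2_deaths, world_pop_1940 = 75e6, 2.3e9   # ~70-85 million dead, ~2.3 billion people

print(f"Implied odds humanity reaches 2100 (experts): {1 - expert_extinction:.0%}")
print(f"Five-year survival after a prostate cancer diagnosis: {prostate_5yr_survival:.0%}")
print(f"Superforecaster extinction odds vs. IRS audit odds: "
      f"{super_extinction:.1%} vs. {irs_audit_rate:.1%}")
print(f"Share of humanity killed in World War II: {ww2_deaths / world_pop_1940:.1%} "
      f"(vs. the tournament's 10% 'catastrophe' threshold; "
      f"experts put {expert_catastrophe:.0%} odds on that, superforecasters {super_catastrophe:.0%})")
```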

But remember what I said about predicting the future perfectly. So the obvious question is … should you believe any of it?

How the forecasting tournament worked

The forecasting tournament (called “XPT” for short) recruited some 80 experts to participate. The sample was heavily weighted in favor of experts on AI, of whom 32 participated. There were 12 experts on biological risks, 12 nuclear experts, 9 climate experts, and 15 “general” experts who study a range of extinction threats to humanity. They paired these with a sample of 88 superforecasters working through Good Judgment Inc., Tetlock’s private company where these forecasters make predictions for consulting clients.

The tournament did not simply ask participants to make estimates and leave it at that. Those initial individual forecasts (specifically, predictions of events for 2024, 2030, 2050, and 2100) were just step one. Next, the forecasters started collaborating: they worked in teams of 16, seeing one another’s forecasts and offering comments. They voted on which comments were most informative, with $1,000 prizes for the highest-quality comments to incentivize them to take the exercise seriously. These teams were initially either all superforecasters or all experts, but later on, new combined teams of superforecasters and experts were created. Those teams were asked to build a wiki that would explain and document their forecasts. Finally, each team was given access to another team’s wiki and asked to update their views.

Karger, a research economist at the Chicago Fed who first got interested in forecasting when he participated as a forecaster in some of Tetlock’s experiments, said one of the most important lessons from the research is that little persuasion took place through these processes. “When you’re trying to answer unresolvable questions, there isn’t that much persuasion,” he told me.

As you might expect, experts on a particular risk usually put bigger odds on that risk wiping out humanity or killing 10 percent of the population than experts on other risks did. Nuclear experts put 0.55 percent odds on nuclear-induced extinction by 2100; experts on other risks put the odds at 0.19 percent, roughly a third as high. Both AI experts and experts in other fields rated AI-caused extinction as the biggest risk, with AI experts putting it at 3 percent and experts on other risks at 2 percent.

These averages mask considerable variation in how individual experts and superforecasters saw these risks. On the issue of AI specifically, the authors separated out the most concerned third of their forecasters (both experts and generalist superforecasters) and the least concerned third. The AI-concerned group was very concerned, with the median member putting 11 percent odds on human extinction; the median AI skeptic put the odds at 0.115 percent, not zero but nearly a hundred times lower.

“AI skeptics saw claims that AI will lead to catastrophic outcomes as extraordinary and thus as requiring extraordinary evidence,” the authors explain. “AI-concerned forecasters were more likely to place the burden of proof on skeptics to explain why AI is not dangerous. They typically started from a prior that when a more intelligent species or civilization arises, it will overpower competitors.” Karger told me that concerned people specifically mentioned that they were deferring to work by researchers Ajeya Cotra and Toby Ord, work that gave them reason to think AI is especially dangerous.

To sort out who to believe, Karger and his co-authors had hoped to find that AI-concerned and AI-skeptic forecasters had different impressions of what will happen in the near future: in 2024, or even 2030. If there were a “skeptical” set of predictions for the near future, and a “concerned” set of predictions, we could see in the next few years who’s right and come to trust the more accurate group more.

But that didn’t happen. “Over the next 10 years, there really wasn’t that much disagreement between groups of people who disagreed about those longer run questions,” Karger told me. That makes it a lot harder to sort through who’s right. People were basing their sense of the danger over the next 100 years less on what’s happening in the near-term technologically, and more on almost philosophical beliefs about the level of risk in the world — beliefs that are hard to argue with or about.

Notably, the tournament organizers did not ask about extinction risk from climate change, despite involving several experts on the topic. In the paper, they explain that “the impacts would be too slow-moving to meet our ‘catastrophic’ threshold (10 percent of humans dying within a 5-year period) and in pilot interviews climate experts told us they would place an extremely low probability on extinction risk from climate change.”

What should we make of these forecasts?

So do these results actually mean we face a real chance of human extinction this century? That the odds of 10 percent of humanity dying off in one event are higher than a person’s odds of dying in their first trigger pull during a game of Russian roulette (approximately 17 percent)?

I admire Karger, Rosenberg, Tetlock, and their co-authors — Zachary Jacobs, Molly Hickman, Rose Hadshar, Kayla Gamin, Taylor Smith, Bridget Williams, Tegan McCaslin, and Stephen Thomas — for trying to use all the tools we have to answer some important questions. But there are good reasons to be skeptical that these methods can tell us much about the world in 2030 — let alone for the 70 years beyond.

For one thing, the superforecasters used in this study are a “set of forecasters with high levels of accuracy on short-run (0-2 year timespan) resolvable questions.” That doesn’t necessarily mean they’re good at soothsaying far into the future. “It is an open question,” the authors concede, “whether forecasters who are accurate on short-run questions will also be accurate on longer-run questions.” What’s more, the group was selected based on tournaments run between 2011 and 2015. Maybe their abilities have degraded? “It is also possible that the epistemic strategies that were successful in 2011-2015, when the superforecasters attained their status, are not as appropriate at other points in time,” the authors concede.

It’s perhaps suggestive that superforecasters and experts alike rate AI as the most likely cause of extinction. There’s a long history of computer scientists and others making this argument, but it’s worth noting that AI is the threat considered in the forecasting paper about which we know the least. We have a very good idea of what thermonuclear bombs and natural pathogens and even engineered pathogens might be able to do, based on past experience. A rogue AI capable of performing most or all tasks a human can does not exist yet, and skeptics argue it never will. If estimated risk tends to fall as we learn more about a threat, that suggests our estimated risk from AI will fall in the future as the technology develops and we learn more about it.

There’s also a risk of groupthink. The report notes that 42 percent of respondents reported having attended an effective altruism (EA) community meetup. That makes some sense — effective altruism has long focused on extinction risks and so it’s natural that experts on extinction would have ties to the EA community — but it’s also worrisome. I have a lot of affection for the EA movement and identify as an EA myself. But EAs are as prone to forming bubbles and reinforcing each other’s beliefs as anybody; I’d love to see a survey with more experts outside this clique included.

What’s more, other “forecasts” with significant skin in the game put pretty low odds on the emergence of AI powerful enough to cause extinction. Basil Halperin, Trevor Chow, and J. Zachary Mazlish recently noted that if large institutional investors expected human-level artificial intelligence soon, financial markets would reflect that: interest rates should be very high. A world with human-level AI is probably an extremely rich world, and if people knew that world was coming soon, they would have less reason to save; they’re going to be rich very soon anyway. Companies and governments would then have to offer higher interest to get people to save at all, and interest rates would soar.
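To see why expectations of transformative AI should show up in interest rates, here’s a minimal sketch using the textbook Ramsey rule, which ties real rates to expected consumption growth. The formula is standard; the parameter values are my own illustrative assumptions, not numbers from Halperin, Chow, and Mazlish.

```python
# Illustrative only: the textbook Ramsey rule links the real interest rate r
# to expected consumption growth g:  r = rho + sigma * g,
# where rho is the rate of pure time preference and sigma governs how strongly
# people want to smooth consumption over time. Parameter values are assumptions.

rho = 0.01    # pure time preference (1%)
sigma = 1.5   # inverse elasticity of intertemporal substitution

def ramsey_rate(growth: float) -> float:
    """Real interest rate implied by expected per-capita consumption growth."""
    return rho + sigma * growth

normal_growth = 0.02     # ~2% annual growth, roughly the historical norm
ai_boom_growth = 0.20    # 20%+ growth, the kind transformative AI might bring

print(f"Implied real rate with normal growth: {ramsey_rate(normal_growth):.1%}")        # ~4%
print(f"Implied real rate if markets expected an AI boom: {ramsey_rate(ai_boom_growth):.1%}")  # ~31%
```

The point of the sketch: if investors genuinely priced in a near-term AI boom (or bust), the implied rates would be far above anything we observe today.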

But interest rates aren’t extraordinarily high right now, which suggests markets do not expect human-level AI anytime soon. And unlike with forecasting tournaments, the gains to betting right in financial markets are in the billions if not trillions of dollars. People have a really large incentive to bet correctly.

That being said, you would have made a lot of money predicting an upcoming pandemic in December 2019. The markets didn’t see that coming, and they might not see other big risks coming either.

Helpfully, the Forecasting Research Institute had participants make a number of predictions specifically about the year 2024. That means that within eighteen months, we’ll have a much better sense of how accurately these forecasters are predicting developments in AI, biotech, and other fields relevant to potential apocalypse. In 2030, we’ll know even more.

“Something I’m very excited to do is to analyze the correlation and accuracy between the 2024 and the 2030 questions,” Karger said. “If I can tell you that people who are accurate on questions over a two-year time horizon are also accurate on questions over an eight-year time horizon, then I think we have made some progress.”
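For a sense of what that analysis might look like, here’s a minimal sketch using made-up data: score each forecaster’s accuracy with Brier scores (mean squared error of probability forecasts) and check whether short-horizon and long-horizon accuracy are correlated across forecasters. Nothing here comes from the tournament data.

```python
import numpy as np

# Hypothetical data: each row is one forecaster, each column a resolved question.
# Forecasts are probabilities; outcomes are 0/1. None of this comes from the study.
rng = np.random.default_rng(0)
n_forecasters, n_short, n_long = 50, 20, 20

short_forecasts = rng.uniform(0, 1, (n_forecasters, n_short))
short_outcomes = rng.integers(0, 2, n_short)
long_forecasts = rng.uniform(0, 1, (n_forecasters, n_long))
long_outcomes = rng.integers(0, 2, n_long)

def brier(forecasts: np.ndarray, outcomes: np.ndarray) -> np.ndarray:
    """Mean squared error of probabilistic forecasts per forecaster; lower is better."""
    return ((forecasts - outcomes) ** 2).mean(axis=1)

short_scores = brier(short_forecasts, short_outcomes)   # accuracy on ~2-year questions
long_scores = brier(long_forecasts, long_outcomes)      # accuracy on ~8-year questions

# If short-horizon skill carries over to longer horizons, these scores
# should be positively correlated across forecasters.
print(np.corrcoef(short_scores, long_scores)[0, 1])
```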
