Why we’re scared of AI and not scared enough of bio risks

What we choose to panic about has less to do with the facts and more to do with chance.

An employee of the State Office for Fair Trading (LAVES) at work in a laboratory in which avian flu samples are being tested, in Oldenburg, Germany, on November 29, 2016.
Carmen Jaspersen/picture alliance via Getty Images
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

When does America underreact, and when does it overreact?

After 3,000 people were killed on 9/11, the US invaded two countries, leading to multitrillion-dollar occupations that cost the lives of hundreds of thousands of people, including American and allied soldiers and civilians in Iraq and Afghanistan. The US made permanent, economically costly, seriously inconvenient changes to how air travel works to prevent it from ever happening again.

More than 1 million Americans died of Covid-19, and while in the early months of the pandemic the country made massive, life-altering changes to reduce its spread, it has done very close to absolutely nothing to make sure it never happens again. (Maybe this is because of the massive, life-altering changes in the early months of the pandemic; they became unpopular enough that warnings that we should avoid having another pandemic often get a hostile response.)

More directly, the US is still conducting research aimed at making diseases deadlier and more contagious, even while there’s legitimate concern that work like that may have caused Covid in the first place. And despite the enormous human and economic toll of the coronavirus, Congress has done little to fund the preparedness work that could blunt the effects of the next pandemic.

Taking AI seriously

I’ve been thinking about all this as AI and the possibility that sufficiently powerful systems will kill us all suddenly emerged onto center stage. An open letter signed by major figures in machine learning research, as well as by leading tech figures like Elon Musk, called for a six-month pause on building models more powerful than OpenAI’s new GPT-4. In Time magazine, AI safety absolutist Eliezer Yudkowsky argued the letter didn’t go far enough and that we need a lasting, enforced international moratorium that treats AI as more dangerous than nuclear weapons.

In a fairly stunning CBS interview last month, Geoff Hinton, a highly respected senior AI researcher, was asked by a disbelieving interviewer, “What do you think the chances are of AI just wiping out humanity?” Hinton, whose pioneering work on deep learning helped make large language models like ChatGPT possible, replied, “It’s not inconceivable.”

On March 30, Fox News correspondent Peter Doocy read a line from Yudkowsky’s Time piece to White House press secretary Karine Jean-Pierre: “‘Literally everyone on Earth will die.’ Would you agree that does not sound good?” To nervous laughter, Jean-Pierre assured everyone that the White House has a blueprint for safe AI development.

Don’t forget biology

I’ve argued for years that sufficiently powerful AI systems might end civilization as we know it. In a sense, it’s gratifying to see that position given the mainstream hearing and open discussion that I think it deserves.

But it’s also mystifying. Research that seeks to make pathogens more powerful might also end civilization as we know it! Yet our response to that possibility has largely been a big collective shrug.

There are people heroically working to make US regulations surrounding this research clearer and better, but they’re largely doing so in the background, without the public outcry and scrutiny that one might expect a question with these stakes to inspire.

And while slowing down AI development is going to be difficult, controversial, and complicated given the sheer number of companies working on it and the potential size of the market, there are only a few labs doing dangerous gain-of-function research on pathogens of pandemic potential. That should make shutting that work down much easier — or at least, you’d think so.

Playing dice with existential risks

Ultimately, and this isn’t very satisfying at all, my sense is that these fairly momentous changes in our trajectory and priorities often depend on random chance.

If by coincidence someone had happened to discover the 9/11 hijackers in time to stop them, the world we live in today would look radically different.

If by coincidence different people had been in key administration roles when Covid-19 started, we’d know a lot more about its origins and conceivably be a lot more willing to demand better lab safety policy.

And as for where the movement to slow down AI goes from here, a lot of that feels to me like it’s also up to chance. Which messages snatch public attention? Are there notable safety scares, and do they clarify the picture of what we’re up against or make it muddier?

I’d love to live in a world where how we respond to existential risk wasn’t up to chance or what happens to catch the public’s and the media’s attention, one where risks to the security of our whole world received sober scrutiny regardless of whether they happened to make the headlines. In practice, though, we seem to be lucky if world-altering dangerous research — whether on AI or biology — gets any public scrutiny at all.

A version of this story was initially published in the Future Perfect newsletter.
