Four different ways of understanding AI — and its risks

Worldviews are clashing when it comes to artificial intelligence.

Sam Altman, CEO of OpenAI, testifies in Washington, DC, on May 16, 2023. | Aaron Schwartz/Xinhua via Getty Images
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges.

I sometimes think of there being two major divides in the world of artificial intelligence. One, of course, is whether the researchers building advanced AI systems, which are being applied to everything from medicine to science, are going to bring about catastrophe.

But the other one — which may be more important — is whether artificial intelligence is a big deal or just another ultimately trivial piece of tech that we’ve somehow developed a societal obsession with. So we have some improved chatbots, goes the skeptical perspective. That won’t end our world — but neither will it vastly improve it.

One comparison I sometimes see is to cryptocurrency. A couple of years ago, there were plenty of people in the tech world convinced that decentralized currencies were going to fundamentally transform the world we live in. But they mostly haven’t, because it turns out that many things people care about, like fraud prevention and ease of use, actually depend on the centralization that crypto was meant to dismantle.

In general, when Silicon Valley declares that its topic du jour is the Biggest Deal In The History Of The World, the correct response is some healthy skepticism. That obsession may end up as the foundation of some cool new companies, it might contribute to changes in how we work and how we live, and it will almost certainly make some people very rich. But most new technologies do not have anywhere near the transformative effects on the world that their proponents claim.

I don’t think AI will be the next cryptocurrency. Large language model-based technologies like ChatGPT have seen much, much faster adoption than cryptocurrency ever did. They’re already replacing and transforming far more jobs. The rate of progress in this space over just the past five years is shocking. But I still want to do justice to the skeptical perspective here; most of the time, when we’re told something is an enormously big deal, it really isn’t.

Four quadrants of thinking about AI

Building off that, you can visualize the range of attitudes about artificial intelligence as falling into four broad categories.

You have the people who think extremely powerful AI is on the horizon and is going to transform our world. Some of them are convinced it’ll be a very, very good thing.

“Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful,” Marc Andreessen wrote in a recent blog post.

Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds. ...

AI is quite possibly the most important — and best — thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those. ...

The development and proliferation of AI — far from a risk that we should fear — is a moral obligation that we have to ourselves, to our children, and to our future.

We should be living in a much better world with AI, and now we can.

Call that the “it’ll be big, and it’ll be good” quadrant. Contrast that with, say, AI Impacts’ Katja Grace, whose recent survey found half of machine learning researchers saying there is a substantial chance that AI will lead to human extinction. “Progress in AI could lead to the creation of superhumanly smart artificial ‘people’ with goals that conflict with humanity’s interests — and the ability to pursue them autonomously,” she recently wrote in Time.

(In the middle, perhaps you’d place AI pioneer Yoshua Bengio, who has argued that “unless a breakthrough is achieved in AI alignment research ... we do not have strong safety guarantees. What remains unknown is the severity of the harm that may follow from a misalignment (and it would depend on the specifics of the misalignment).”)

Then there’s the “AI won’t majorly transform our world — all that superintelligence stuff is nonsense — but it will still be bad” quadrant. “It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future,” several AI ethics researchers wrote in response to the recent Future of Life Institute letter calling for a pause on the training of extremely powerful systems. These superintelligence skeptics argued that focusing on the most extreme, existential outcomes of AI will distract us from worker exploitation and bias made possible by the technology today.

And last, there’s the “AI won’t majorly transform our world — all that superintelligence stuff is nonsense — but it will be good” quadrant, which includes plenty of the people building AI tools for programmers. Many people I talk to in this corner think that both superintelligence concerns and bias or worker exploitation concerns are overblown. AI will be like most other technologies: good if we use it for good things, which we mostly will.

Talking past one another

It often feels like, in conversations about AI, we’re talking past one another, and I think the four-quadrant picture I proposed above makes it clearer why. The people who think AI could be a world-shattering big deal have a lot to discuss with one another.

If AI really is going to be a huge force for good, for augmentation of human strengths and vast improvements to every aspect of the way we live, then delaying it too long over safety concerns risks letting millions of people who could benefit from its advances suffer and die unnecessarily. The people who think that AI development poses major world-altering risks need to make the case to the optimists that those risks are serious enough to justify the genuinely enormous costs of slowing down development of such a powerful technology. If AI is a world-altering big deal, then the high-level societal conversation we want to be having is about how best to safely get to the stage where it alters the world for the better.

But many people aren’t persuaded that AI is going to be a big deal at all and find the conversation about whether to speed up or slow down baffling. From their perspective, there is no world-altering new thing on the horizon at all, and we should aggressively regulate current AI systems (if they are mostly bad and we mostly want to limit their deployment) or leave current AI systems alone (if they are mostly good and we mostly want to encourage their deployment).

Either way, they’re baffled when people respond with measures aimed at safely guiding superintelligent systems. Andreessen’s claims about the enormous potential of AI are just as nonresponsive to their concerns as Grace’s case that we should steer away from an AI arms race that could get us all killed.

For the societal conversation about AI to go well, I think everyone could stand to entertain a bit more uncertainty. With AI moving as fast as it is, it’s really hard to confidently rule anything in — or out. We’re deeply confused about why our current techniques have worked as well as they have, and about how long we’ll keep seeing improvements. What breakthroughs are on the horizon is anyone’s guess. Andreessen’s glorious utopia seems like a real possibility to me. So does utter catastrophe. And so does a relatively humdrum decade passing without massive new breakthroughs.

Everyone might find we’re talking past each other a little less if we acknowledge a little more that the territory we’re entering on AI is as confusing as it is uncertain.
