Argument analysis of Andreessen’s ‘AI Will Save the World’ article

I’m in San Francisco at the Internet Archive (one of the most wonderful artefacts in the history of the web – read about it), attending an AI Knowledge Mapping Hackathon run by Society Library founder Jamie Joyce. The event supports the building of a knowledge graph of public debates, and this session focused on Marc Andreessen’s recent, widely discussed AI Will Save the World article, adding human-generated arguments to every sentence and AI-excerpted proposition.

It’s a great exercise. When I first read the article and listened to the accompanying podcast, I found myself agreeing with just about everything, yet it still left me deeply unconvinced on some aspects of the piece. So it’s been good to go back to it in depth.

We were given a spreadsheet containing all 320 sentences in the article, as well as a list of 128 summarized general claims, and were invited to provide supporting, refuting, or refining arguments. Here is the argument spreadsheet if you would like to add any arguments yourself!

Here is part of what I contributed to the debate mapping, commenting on Andreessen’s statements.

“AI will not destroy the world, and in fact may save it.”

The first part of this sentence is a statement of belief. In fact, very little in the article directly supports this statement.

The second part of the statement is also not directly supported in the article. The article does, however, extensively describe many of the ways that AI could have substantial positive social benefit, which is not the same thing as saving the world.

The fact that AI can have substantial positive impact in a variety of domains does not, in fact, repudiate its potential for destroying the world.

Much of the article argues that there is a moral panic about AI, with many of the major protagonists benefiting from this panic. Andreessen himself writes that:

“it’s not that the mere existence of a moral panic means there is nothing to be concerned about.”

He then goes on to focus on the fact that we have an AI moral panic, which is probably a fair assessment but does not validate his broader case.

One of the most pointed statements Andreessen makes is:

“The claim that the owners of AI will steal societal wealth from workers is a fallacy.”

There are a few problems with this. It is an assertion with little subsequent substantiation.

“Steal” is an emotive word and an active verb. Even if there were no intent to take wealth from workers (a generous assumption), that could happen naturally due to an array of factors. In fact, the U.S. labor share of GDP in recent years has been substantially lower than at any time prior to 2008.

Given the extraordinary economic and social value that Andreessen professes AI will give us, it would be surprising if the companies that own the most-used AI did not accrue a very high proportion of that economic value.

I personally agree with Andreessen that we are likely to have strong employment into the indefinite future. However the disruption to livelihoods and lives as existing roles are eliminated or reshaped by AI could still be brutal, and there is no evidence that the well-documented polarization of work will not continue. 

Andreessen also argues that AI will not kill us all. 

“AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.”

The specific words here are important. Andreessen’s argument for why AI will not kill us is founded on two concepts:

“AI doesn’t want.” However, just because AI doesn’t have explicit volition doesn’t mean it won’t engage in non-human-aligned action.

“AI is not going to come alive.” This is in fact highly debatable, and has no direct bearing on whether AI may kill us all. 

As such, these are by no means solid arguments.

The points I’ve addressed above are where Andreessen’s article is weakest. I do agree with almost all of the points that he makes. But the things he has justified most thoroughly in the piece are not in fact his central points, which he essentially presents as assertions.

I and others consider these assertions to be (highly) questionable.