
What to know about OpenAI’s failed coup

Sam Altman is back at OpenAI. What happens to its safety mission?

Sam Altman waving from onstage at OpenAI’s DevDay. He’s back! | Justin Sullivan/Getty Images

So, OpenAI had a weird week. The hottest company in tech just saw the removal, replacement, and reinstatement of its superstar CEO, Sam Altman, in the span of five days. It also saw, as a result of that Altman drama, the removal and replacement of most of its board of directors. In the middle of this, almost every OpenAI employee threatened to quit, the company cycled through two interim CEOs, Microsoft set up a new Altman-led AI arm of its own, and we all faced the very real possibility that the $80 billion company behind ChatGPT would completely implode.

And we still don’t really know why.

The chaos started on November 17, when the OpenAI board announced Altman’s termination, kicking off several days of negotiations to bring him back, as was the desire of the company’s employees and its main investor, Microsoft. On November 22, OpenAI announced that Altman would indeed be returning as CEO, and most of the board that voted to fire him was being replaced.

This is not, suffice to say, how CEO firings traditionally play out. But OpenAI isn’t a traditional company. It became a Silicon Valley success story at a time when the industry was seen as largely stagnant. In the past year, thousands have been laid off at tech companies that had only ever known growth. Then along came generative AI and ChatGPT, new technology that is cool and exciting to everyone from the average consumer to some of the most valuable companies in the world. One of them, Microsoft, eagerly hitched its wagon to OpenAI and to Altman, who became the poster boy of the billion-dollar AI revolution.

OpenAI, as the leading developer of the technology that could shape how (or if) we live in the future, was shaping up to be one of the most important companies in the world. For a few days there, it looked like we were witnessing the effective end of that company. Now, however, order seems to have been restored.

That still leaves some big questions unanswered. Again, we still don’t know why OpenAI’s previous board made the extreme decision to remove Altman — nor do we know if their concerns with Altman were alleviated before he came back. And now that there’s a new board in place, one that includes a former Meta executive and a former treasury secretary, it’s hard to predict exactly what OpenAI does next.

Why did Sam Altman get fired?

The short answer: It’s still unclear. Altman seems to have no idea what happened, and the board has said very little, publicly, about its reasoning beyond that it didn’t trust Altman anymore. It has also, reportedly, refused to say much privately. It appears there were fundamental differences between the (now former) board’s vision for AI, which included carrying out OpenAI’s founding mission of safety and transparency, and Altman’s vision, which, apparently, was not that.

How did Altman come back when the board was so determined to get rid of him?

Well, that board no longer exists, for one. As part of the deal to bring Altman back, most of its members were replaced, presumably with people Altman wants to be there and who share his vision. Those new members are former Salesforce CEO Bret Taylor, who will serve as its chair, and economist Larry Summers. Quora CEO Adam D’Angelo will remain on, the only member of the previous board to stick around. Because OpenAI described this as an “initial” board, we will almost certainly get a few additions in time, perhaps including Altman, who was on OpenAI’s original board, and someone from Microsoft.

The departing board members are Ilya Sutskever, who co-founded OpenAI and is its chief scientist; tech entrepreneur Tasha McCauley; and Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology. Toner, reportedly, had an especially frosty relationship with Altman because she co-authored a research paper that he saw as critical of OpenAI. Toner’s only public comment so far is that she’s looking forward to getting some sleep.

More than a few people have noted that, aside from Sutskever (who made his change of heart known and still works at OpenAI), the only board members who were removed happen to be women — and they’ve been replaced with two white men. The optics aren’t great here, but, again, the board will likely get additional members soon, and they may well not be white men.

Perhaps more important, as far as Altman and the investors who pushed for the board to be revamped are concerned, the board is now made up of people with tech board and business experience. D’Angelo and Taylor were both chief technology officers at Facebook, for one, and Taylor was the chair of Twitter’s board until Elon Musk took over. As for Summers, he’s currently the director of the Mossavar-Rahmani Center for Business and Government at Harvard, where he previously served as president, and he held prominent positions in the Clinton (secretary of the treasury) and Obama (director of the National Economic Council) administrations. He’s also seen as someone who is very tech-business-friendly and would never dream of putting safety before profit.

How did Sam Altman, the boy wonder of AI, become a controversial figure?

Before Altman headed up OpenAI, he was the CEO of the influential startup accelerator Y Combinator, so he was well known in certain Silicon Valley circles. Altman was also a co-founder of OpenAI, and as the company started to be seen as the leader of a new technological revolution, he put himself forward as its youthful, press-friendly ambassador. As CEO, he went on an AI world tour, rubbing elbows with and winning over world leaders and telling various governments, including Congress and the Biden administration, how best to regulate this transformative technology — in ways that were very much advantageous to OpenAI and therefore Altman.

Altman often says that his company’s products could contribute to the end of humanity itself. Not many CEOs (at least, of companies that don’t make weapons) humblebrag about how potentially dangerous their business’s products are. That could be seen as a CEO being refreshingly honest, even if it makes his company look bad. It could also be seen as a CEO saying that his company is one of the most important and powerful things in the world, and you should trust him to lead it because he cares that much about all of us.

If you see generative AI as an enormously beneficial tool for humanity, you’re probably a fan of Altman. If you’re concerned about how the world will change when generative AI starts to replace human jobs and presumably becomes more and more powerful, you may not like Altman very much.

Simply put, Altman has made himself the face of AI, and people have responded accordingly.

And how did OpenAI get to be such a big deal?

OpenAI was founded in 2015, but it’s never been your average Silicon Valley startup. For one, it had the backing of many prominent tech people, including Peter Thiel, Reid Hoffman, and Elon Musk, who is also credited as one of its co-founders. Second, OpenAI was founded as a nonprofit. Its mission was not to move as quickly as possible and make as much money as possible, but rather to research and develop a technology with enormous transformative potential, work that therefore needed to be done safely, responsibly, and transparently: AI with the ability to learn and think for itself, also known as artificial general intelligence, or AGI. To get there, the company would need to develop generative AI, or AI that can learn from massive amounts of data and generate content upon request.

A few years later, OpenAI needed money. Altman took over as CEO in 2019, and around that time the company established a “capped profit” arm, allowing investors to earn up to 100 times their original investment. Any profit beyond that — if there was any — would go back into OpenAI’s nonprofit. The company was still governed by a board of directors charged with carrying out that nonprofit mission, but the board was pretty much the only thing left of OpenAI’s nonprofit origins.

OpenAI released some of its generative AI products into the world in 2022, giving everyone a chance to experiment with them. People were impressed, and OpenAI came to be seen as the leader of a burgeoning industry. Thanks to $13 billion in investments from Microsoft, OpenAI has been able to develop and market its services, giving Microsoft access to the new technologies along the way. Microsoft pinned a large part of its future on AI, and with its investment in OpenAI, it established a partnership with the most prominent and seemingly most advanced company in the field. And OpenAI’s valuation grew by leaps and bounds.

Meanwhile, Altman emerged as the leader of the AI movement because he was the head of the leading AI company, a role he has embraced. He has extolled the virtues of AI (and OpenAI) to world leaders. He says regulation is important, lest his company become too powerful (only to balk when regulation actually happens). And along the way, he has become one of the most powerful people in tech, if not beyond. Which is part of why his abrupt termination as CEO of OpenAI was such a shock.

If Altman was otherwise so popular, what was the OpenAI board so upset about?

Removing Altman could have amounted to a huge, potentially company-destroying deal, so you’d think there’d be a very good reason the OpenAI board decided to do it. It has yet to tell us what that reason is.

The board has the authority to remove its CEO with a majority vote. Altman and OpenAI co-founder and president Greg Brockman were on that board — Brockman was its chair — but clearly not involved in the vote for their own ouster from it.

The board said in a statement that its decision was the result of a “deliberative review process by the board, which concluded that [Altman] was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

So, yeah, that’s a little vague. For what it’s worth, Emmett Shear, who briefly served as OpenAI’s interim CEO during all of this, tweeted that “the board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”

We do have some reporting that Altman and the board hadn’t gotten along for a while, with much of the tension stemming from the release and massive success of ChatGPT. OpenAI suddenly became one of the hottest tech companies around and moved quickly to capitalize on that. That’s what a for-profit startup does — not a nonprofit, which, again, OpenAI supposedly was.

Altman hasn’t said anything publicly about why he was removed, and it’s beyond belief that he had no idea that there were tensions. Brockman, who resigned in solidarity with Altman, said that he and Altman were “shocked and saddened.” Presumably, more will come out in time about the board’s reasoning for firing Altman. According to the New York Times, there will be an “independent investigation” into Altman as part of the former board’s deal to bring him back.

Given OpenAI’s mission to develop safe and responsible AI, there are fears that Altman was driving the development of unsafe and irresponsible AI and that the board felt it had to put a stop to it. But, again, we don’t yet know if those fears are founded.

What happened after Altman got fired? OpenAI got a new CEO and everyone was happy?

During the five days when Altman was not CEO, OpenAI actually got two interim CEOs and, it seems, almost no one was happy about any of it.

When the board announced Altman’s departure, it said that chief technology officer Mira Murati would be its interim CEO. Over the next few days, many of OpenAI’s employees openly revolted, and the board was reported to be desperately trying to get Altman back, with Microsoft very much pressuring it to do so. But then Shear, who is Twitch’s co-founder and former CEO, announced that he was OpenAI’s new interim CEO. Not Murati, and not Altman.

It didn’t seem like he’d have much to oversee, with most of OpenAI’s employees signing a letter threatening to quit if Altman and Brockman weren’t reinstated and the current board didn’t leave. Murati was the letter’s first signatory. Several prominent OpenAI employees also tweeted that “OpenAI is nothing without its people,” which Altman quote-tweeted with a single heart. Sutskever was also a signatory of the letter. He has since tweeted that “I deeply regret my participation in the board’s actions.” (Which earned him a three-heart quote tweet from Altman — no hard feelings!)

With Altman back at the helm, it appears that most of the order has been restored. Brockman is also back and tweeted a photo of himself with many OpenAI employees, all looking quite happy about everything.

How did the rest of Silicon Valley respond to the drama? Do people still think Altman should be running OpenAI?

Sam Altman is a very wealthy, very well-connected entrepreneur-turned-investor who was also running the most exciting tech startup in years. So it’s not surprising that once the news of his firing broke, the tech industry’s narrative quickly became one about the OpenAI board’s ineptitude, not any of his shortcomings.

But there is a world beyond the tech industry, and not everyone in it is behind Altman. You won’t hear many people defending the board out loud, since it’s much safer to support Altman. But writer Eric Newcomer, in a post he published on November 19, took a stab at it. He noted, for instance, that Altman has had fallings-out with partners before — one of whom was Elon Musk — and reported that Altman was asked to leave his perch running Y Combinator.

“Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation,” Newcomer wrote. “He lost the trust of his board. We should take that seriously.”

What was Microsoft’s response to all this? Did they really offer Altman a job?

Microsoft has poured billions of dollars into OpenAI, and a big part of its future direction is riding on OpenAI’s success. OpenAI’s complete implosion would be a very bad development for that future.

When it seemed that talks between Altman and OpenAI had broken down, Microsoft CEO Satya Nadella tweeted that the company was still very confident in OpenAI and its new leadership, but that it was also starting a “new advanced AI research team” headed up by — you guessed it — Sam Altman. He added that Brockman and unnamed “colleagues” were also on board.

But Nadella also made it very clear, in multiple interviews, that he was open to (and would prefer) Altman returning to run OpenAI — and that he wasn’t very happy with its board, which neither consulted Microsoft nor gave it a heads-up about its plans, let alone told its partner and main investor why it made that decision. And Altman tweeted that “satya and my top priority remains to ensure openai continues to thrive.”

With Altman back at OpenAI, it looks like Microsoft’s new AI research team won’t be going forward. Altman tweeted that his return to OpenAI was done “w satya’s support.”

What does all this mean for AI safety?

That kind of depends on what OpenAI had in the works and what Altman’s plans for it were, doesn’t it? Maybe Altman and OpenAI figured out the artificial general intelligence puzzle and the board thought it was too powerful to release, so it canned him. Maybe it had nothing to do with OpenAI’s tech at all and more to do with the unresolvable conflict between a nonprofit’s mission and an executive’s quest to build the most valuable company in the world — a conflict that got worse and worse as OpenAI and Altman got bigger and bigger. And which, in the end, Altman won.

If nothing else, this whole debacle serves as a reminder that the safety of products shouldn’t be left to the businesses that put them out into the world, which are generally only interested in safety when it makes them money or stops them from losing it. Housing that mission within a safety-focused nonprofit will only work as long as the nonprofit doesn’t stop the company from making money. And remember, OpenAI isn’t the only company working on this technology. Plenty of others that are very much not nonprofits, like Google and Meta, have their own generative AI models.

Governments around the world are trying to figure out how best to regulate AI. How safe this technology ends up being will largely depend on whether and how they do it. It won’t and shouldn’t depend on one man (read: Altman) who says he has the world’s best interests at heart and that we should trust him.

Update, November 22, 12:30 pm ET: This story was originally published on November 20 and has been updated to include news of Altman’s reinstatement and more details about his ouster and return.
