Nearly a week into a leadership crisis that’s thrown the artificial intelligence industry into chaos, OpenAI’s future, along with that of its former CEO Sam Altman, remains as uncertain as ever.
If Altman’s abrupt ouster came as a surprise, it was nothing compared to the ensuing chaos and whiplash of the past few days that led the vast majority of OpenAI’s employees to threaten the company with their resignations. That has only reinforced the impression that OpenAI’s boardroom drama could take an unexpected turn at any moment.
As if to underscore the confusion, Monday began with an announcement that Altman would be joining Microsoft to lead an in-house AI research team, and the day ended with Microsoft’s CEO, Satya Nadella, saying a different outcome was still possible.
“If something happens where their board and people decide they want to get back to some kind of state,” Nadella told CNN contributor Kara Swisher on Swisher’s podcast, “one thing I’ll be very, very clear [about] is, we’re never going to get back into a situation where we get surprised like this ever again. That’s done.”
“If we go back to operating like on Friday,” Nadella added, “we will make sure we are very, very clear that the governance gets fixed in a way that we really have more surety and guarantee that we don’t have surprises.”
On Tuesday, multiple news outlets, including Bloomberg and Reuters, reported that the OpenAI board and Altman were in talks about his possible return. That newfound possibility highlights the volatility of a moment with broad ramifications for the AI industry, including how Microsoft directs billions of dollars in future AI investments.
The situation’s fluidity has been a hallmark of the crisis. First came the board’s sudden elevation of OpenAI’s chief technology officer, Mira Murati, to interim CEO on Friday, announced in the same breath as Altman’s firing. Then, days later, Murati herself signed a letter with hundreds of her colleagues calling for Altman’s return and the board’s resignation, and threatening to quit otherwise.
Ilya Sutskever, a co-founder and board member who by some accounts led the charge on Altman’s ouster, inexplicably signed the letter too, and on Monday he publicly apologized for his role in the saga. For a time, it appeared the board members might step down, but that has not happened, either.
Then Microsoft, which has a deep partnership with OpenAI, announced Monday it would hire Altman to lead an advanced in-house AI research team, plucking him out of sudden unemployment. Except now, it seems, the deal is not yet final.
Whether Altman is working for Microsoft or OpenAI won’t change the ultimate result, Nadella argued to Swisher, which is that Microsoft benefits from his work. And that is true so far as it goes.
But what happens with OpenAI won’t affect just Microsoft; Altman, along with the hundreds of OpenAI staffers who have threatened to quit unless he is reinstated and the board resigns, represents a locus of power and talent in the AI industry with few equals.
Some are already seeking to capitalize on the moment: On Monday afternoon, Salesforce CEO Marc Benioff encouraged OpenAI refugees to come work for his company, promising to match their compensation “immediately.” Nadella said Monday that Microsoft “will definitely have a place for all AI talent” from OpenAI that chooses to jump ship.
The competition for AI talent, and where the power represented by Altman and his allies ultimately winds up, will have an indelible influence on the course of AI development.
The outcome of the crisis could also define a turning point for a long-running ideological battle in Silicon Valley over the long-term risks of AI, which appear to have been at the center of the tensions between Altman and the board that summarily fired him.
Much is still unclear about the board’s exact reasons for firing Altman. Its official explanation was that Altman was not sufficiently “candid” with the board, and OpenAI’s chief operating officer has reportedly said the termination was not related to “malfeasance” or any financial, security or privacy issues. A key factor, Swisher has reported, seems to have been board concerns about Altman’s preferred pace or scope of AI development, highlighting fears about the dangers of an uncontrolled super-intelligence. OpenAI’s newest interim CEO, Emmett Shear, has denied that the firing was due to any “specific” disagreement on AI safety.
Throughout this game of telephone, the board itself has remained steadfastly silent, leaving outsiders to speculate about its motivations and the power play that led to Altman getting axed.
In the process, that silence has created room for critics to accuse the board of incompetence and short-sightedness, which risks indirectly discrediting its position on AI risk, too.
“One thing in the OpenAI stuff that feels consistent (based on some internal convos I’ve had) is that a group of people that are *laser focused on the doomer elements of AI and the more abstract principle of AGI [artificial general intelligence] for the betterment of humanity* might also be the same people who do not fully think through the more human/money elements of firing the figurehead of the generative AI movement and pissing off the company that invested $10 billion in a very specific company/vision,” wrote the journalist Charlie Warzel in a post on Threads over the weekend.
Without weighing in on the substantive fears of existential AI risk, it is possible that one casualty of the OpenAI fiasco may end up being the credibility of some of those who worry about AI the most.