
Mira Murati’s deposition pulled back the curtain on Sam Altman’s ouster
DeepTrendLab's Take on Mira Murati’s deposition pulled back the curtain on Sam Altman’s ouster

The Mira Murati deposition, surfaced this week through evidence and witness testimony in Musk v. Altman, has finally given a documentary spine to what was previously a swirl of leaks and Slack-screenshot rumors. According to the new record, the November 2023 ouster of Sam Altman was not the impulsive coup it appeared to be in real time — it was the downstream consequence of a 52-page memo authored by Ilya Sutskever, materially sourced from Murati, and delivered to a board already wrestling with concerns about Altman's candor on safety processes, his undisclosed personal stake in the OpenAI Startup Fund, and his handling of product launches. Former director Helen Toner's testimony this week confirmed that Murati's and Sutskever's contributions did not merely echo the board's worries; they sharpened them. The signed termination document, dated November 16, 2023 and bearing four unanimous board signatures, installed Murati as interim chief — a role she occupied for roughly seventy-two hours before the counter-coup began.

To understand why this matters now, you have to remember how thinly the original story was sourced. The board's infamous "not consistently candid" blog post was a model of corporate vagueness, and into that vacuum poured every conspiracy theory X could metabolize over a long weekend. The narrative that eventually congealed — board overreach, doomer paranoia, a misfire by inexperienced trustees — became the conventional wisdom largely because the people with documents stayed quiet. Murati's deposition, alongside the trial exhibits, is the first time the evidentiary case against Altman has been laid out in something resembling courtroom-grade form, and it lands at a moment when OpenAI is mid-pivot from capped-profit oddity to a more conventional for-profit structure. The timing is not incidental; Musk's lawyers are clearly attempting to frame the 2023 episode as proof that OpenAI's governance has been compromised since well before the corporate restructuring debate began.

The broader significance is that the AI industry's dominant company turns out to have been governed, at the critical moment, by a CTO quietly building a paper trail against her CEO. That is a very different shape than the founder-led mythology OpenAI projects, and it complicates the comparison with rivals. Anthropic's governance, with its Long-Term Benefit Trust, looks structurally more coherent by contrast, and Google DeepMind's embedding inside Alphabet — long mocked as bureaucratic — now reads as boring in the way investors prefer boring. The deposition reframes OpenAI not as a uniquely mission-driven outlier but as a company whose internal contradictions were severe enough that its own technical leadership felt compelled to route around the CEO via the board.

For developers and enterprise customers building on OpenAI's stack, the practical takeaway is uncomfortable: the vendor-risk concerns that procurement teams raised in late 2023 were not overblown, and the internal disputes that produced the ouster have not, on the available evidence, been resolved so much as papered over. Researchers inside the company — many of whom signed the "OpenAI is nothing without its people" letter under conditions that, in retrospect, look closer to a loyalty test than a spontaneous uprising — now have to reckon with the possibility that the executive they defended was the one their CTO had been quietly documenting. Consumers will notice none of this directly, but every enterprise contract negotiation from here forward gets harder.

Competitively, the disclosure is a gift to Anthropic, xAI, and the open-weights camp, each of which can now point to a concrete governance failure rather than gesturing at vibes. Anthropic in particular has spent eighteen months selling itself to Fortune 500 buyers as the adult in the room; the Murati testimony hands its sales team a citable artifact. xAI, despite Musk being the plaintiff with obvious motives, benefits from any narrative in which OpenAI's nonprofit origin story looks like a fiction. Meta and Mistral get to argue that open weights remove the governance question entirely, because you do not need to trust the lab if you control the model.

What to watch is whether further exhibits substantiate the specific allegations — particularly around the Startup Fund ownership and the safety-process misrepresentations — because those are the claims most likely to draw regulatory attention rather than mere reputational damage. The FTC's lingering interest in OpenAI's disclosures, the SEC's evolving stance on AI-company governance, and the still-unresolved corporate conversion all become more fraught if Murati's documentation holds up under cross-examination. The deeper question is whether Murati's own venture, Thinking Machines Lab, can absorb the scrutiny that comes with being recast from quiet exile to central witness — and whether the AI industry's tolerance for governance opacity is finally about to break.

This article was originally published on The Verge — AI. Read the full piece at the source.
DeepTrendLab curates AI news from 50+ sources. All original content and rights belong to The Verge — AI. DeepTrendLab's analysis is independently written and does not represent the views of the original publisher.