The Altman Reckoning
The week of Thanksgiving 2023 saw AI's biggest governance failure to date. Nearly three years later, Mira Murati's deposition is systematically unpacking what happened when the board tried to remove Sam Altman, and why he's still CEO. The lawsuit forces uncomfortable questions: Did OpenAI sacrifice safety for profit? Can any CEO trusted with superintelligence actually be held accountable? Altman's legal team is fighting on terrain it doesn't control: Musk's lawyers get to frame the narrative of how OpenAI pivoted from nonprofit mission to profit motive.
This isn't just corporate theater. The deposition reveals the machinery of boardroom power, the pressure from investors, the tension between nonprofit mission and VC-backed reality. Every concession in discovery becomes ammunition for regulators, competitors, and future litigation. OpenAI built the most powerful AI system on Earth. How it governs itself matters.
Agents Stop Playing Pretend
OpenAI just shipped WebSocket-based execution for its Responses API, cutting latency in agentic workflows. AWS launched AgentCore Payments. Amazon Bedrock integrated Coinbase and Stripe. Agents that can't transact are demos. Agents that transact at scale are infrastructure. This is the inflection point.
Payment integration matters because it removes the human-in-the-loop requirement. An agent that can approve a $50K vendor payment, resolve a billing dispute, or process a refund without human review is not a tool—it's a delegation of authority. That's also why the regulatory void is terrifying. OpenAI and AWS are shipping this with "preview" caveats, but production deployments follow fast. Enterprises will adopt before guardrails exist.
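The latency claim is easy to see in miniature. The sketch below is not OpenAI's API: it uses a plain local TCP echo server (a stand-in for any request/response endpoint) to show why a persistent connection, the core idea behind WebSocket-style execution, beats opening a fresh connection per request. All names and the server itself are illustrative assumptions.

```python
# Hypothetical demo (stdlib only): persistent connection vs. reconnect-per-request.
import socket
import threading
import time

HOST = "127.0.0.1"

def run_echo_server(server_sock):
    """Accept clients one at a time and echo each newline-terminated request."""
    while True:
        try:
            conn, _ = server_sock.accept()
        except OSError:
            return  # listening socket closed; shut down
        with conn, conn.makefile("rwb") as f:
            for line in f:
                f.write(line)
                f.flush()

# Start the stand-in server on an OS-assigned free port.
server = socket.socket()
server.bind((HOST, 0))
server.listen()
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

N = 50

# Style 1: reconnect per request -- pays TCP setup/teardown on every call.
t0 = time.perf_counter()
for _ in range(N):
    with socket.create_connection((HOST, port)) as c:
        c.sendall(b"ping\n")
        c.recv(64)
per_request = time.perf_counter() - t0

# Style 2: one persistent connection -- a single handshake amortized over N calls.
t0 = time.perf_counter()
with socket.create_connection((HOST, port)) as c, c.makefile("rwb") as f:
    for _ in range(N):
        f.write(b"ping\n")
        f.flush()
        f.readline()
persistent = time.perf_counter() - t0

print(f"reconnect-per-request: {per_request:.4f}s")
print(f"persistent connection: {persistent:.4f}s")
```

On loopback the gap is small in absolute terms; over a real network, with TLS handshakes on every reconnect, it is the difference between a sluggish agent loop and a responsive one.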
Voice Becomes the Interface
OpenAI introduced realtime voice models with reasoning and translation. Parloa is building voice-driven customer service at scale, relying on OpenAI models to handle live conversation. Spotify is positioning itself as the home for AI-generated audio. Save-to-Spotify commands exist. The death of text-based interaction just got marketed professionally.
Voice is harder to moderate than text. Accent discrimination, cultural nuance, the uncanny valley of synthetic speech—these problems feel smaller in boardrooms than they become in production. But the velocity is undeniable. A year ago, voice agents were expensive niche products. Today they're productionized, integrated into consumer platforms, and shipping to enterprise. This happens faster than safety catches up.
Safety Claims Meet Reality
OpenAI launched "Trusted Contact"—an emergency contact feature for users in crisis. Mozilla confirmed that Anthropic's Mythos AI found 271 Firefox vulnerabilities with almost no false positives. Both are real safety wins. And then 404 Media published 10,000 lines of documentation for HELLO BOSS, a realtime deepfake tool powering international scams in Chinese, English, and other languages. It's production-grade software, well-maintained, and generating revenue.
This is the safety paradox of 2026. Legitimate companies ship safety features while malicious actors ship better-engineered software faster. The deepfake scams are winning. They have real users, recurring revenue, active development. Safety sandboxing in the West doesn't slow down fraud in Beijing. The asymmetry matters.
China's $20B Inflection
Moonshot AI raised $2B at a $20B valuation with $200M ARR. The company is the clear open-source leader in China—faster iteration, lower prices, no compliance tax. This is what market dominance looks like when you're not the first mover but you're optimized for your region's regulatory environment and consumer expectations.
The $20B number is the real story. Moonshot isn't winning on capability—OpenAI and Anthropic have more capable models. It's winning on availability, cost, and the absence of Western regulatory overhead. That changes the equation for startups building on foundation models. A Chinese startup using Moonshot gets faster iteration cycles than one waiting for OpenAI's API changes. Scale that across a thousand Chinese companies, and the market divergence becomes permanent.
Hardware Gets Serious
Apple's AirPods with embedded cameras are moving toward production. Google shipped Fitbit Air at $99, a direct Whoop competitor for health data. SpaceX is building a $55B Texas fab for AI chips—Elon Musk's bet that compute manufacturing is now a strategic imperative. These aren't adjacent products. They're the final layer of integration: inference at the edge, AI in your ear, silicon secured by SpaceX instead of TSMC.
The hardware acceleration reveals the real endgame. Inference margins matter more than training. If you can move computation to the phone, watch, or glasses, you own the relationship with the user. The race is to build the moat in silicon and sensors, not just model weights. That's why Apple, Google, and Musk are all moving in the same direction at the same time—they've already conceded the capability race and pivoted to the install base.
All Stories This Period
- How Go Players Disempower Themselves to AI
- Why you can never get your doctor to call you back
- OpenAI launches new voice intelligence features in its API
- ICE Plans to Develop Own Smart Glasses to ‘Supplement’ Its Facial Recognition App
- Voi founders’ new AI startup Pit has become the latest rising star out of Stockholm
- OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
- Perplexity’s Personal Computer is now available to everyone on Mac
- Mira Murati’s deposition pulled back the curtain on Sam Altman’s ouster
- Apple’s AirPods with cameras for AI are apparently close to production
- SpaceX has a $55 billion plan to build AI chips in Texas
- Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope
- Mozilla says 271 vulnerabilities found by Mythos have "almost no false positives"
- Powering the Next American Century: US Energy Secretary Chris Wright and NVIDIA’s Ian Buck on the Genesis Mission
- Bumble is getting rid of the swipe, CEO says
- ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns
- Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI
- AWS Launches Agentic AI Payment Capabilities
- The Joy of Typing
- How Anthropic’s Mythos has rewritten Firefox’s approach to cybersecurity
- Secure short-term GPU capacity for ML workloads with EC2 Capacity Blocks for ML and SageMaker training plans
- Overcoming reward signal challenges: Verifiable rewards-based reinforcement learning with GRPO on SageMaker AI
- Startup Battlefield 200 applications close May 27: A shot at VC access, global visibility, TechCrunch coverage, and $100K
- Give Your AI Unlimited Updated Context
- OpenAI Introduces Websocket-Based Execution Mode to Reduce Latency in Agentic Workflows
- EU Nations Approve Deal to Roll Back AI Restrictions
- Exhibit at TechCrunch Disrupt 2026: Get in front of 10,000 decision-makers before space runs out
- Aurora’s Chris Urmson on why self-driving trucks are finally ready to scale
- Anthropic and SpaceX Agree to Major Compute Capacity Deal
- OpenClaw and Claude can put your AI-generated podcasts in Spotify
- Presentation: Engineering at AI Speed: Lessons from the First Agentically Accelerated Software Project