OpenAI 2024-2025: The Company That Won Everything and Lost Its Way
The Word They Deleted
Sometime in the fall of 2025, OpenAI changed its mission statement. The old version read: “to build general-purpose artificial intelligence that safely benefits humanity, unconstrained by a need to generate financial return.” The new version read: “to ensure that artificial general intelligence benefits all of humanity.”
One word was removed. Safely.
Nobody held a press conference. There was no blog post explaining the decision. The change surfaced in corporate filings related to OpenAI’s restructuring as a for-profit public benefit corporation — a document most people would never read. But the deletion was noticed, and it became a shorthand for something that had been happening at OpenAI for two years: the systematic subordination of safety to speed, of caution to capital, of the founding vision to the demands of a company growing faster than any technology enterprise in history.
Between January 2024 and March 2026, OpenAI went from an $86 billion valuation to $730 billion. Its annual recurring revenue went from roughly $2 billion to $20 billion. ChatGPT’s weekly active users went from 100 million to 900 million. It raised $110 billion in a single funding round — the largest private investment in history. It shipped GPT-5, launched an autonomous coding agent, broke ground on a $500 billion data center network, and restructured itself from a nonprofit with a commercial subsidiary into a for-profit corporation controlled, at least on paper, by a nonprofit foundation.
It also lost its co-founder and chief scientist. Its CTO. Its chief research officer. Its VP of research. The head of its superalignment team. The head of its mission alignment team. The head of its AGI readiness program. Multiple members of its governance team. At least seven researchers to Meta’s superintelligence lab in a single summer. And, along the way, the word “safely.”
This is the story of OpenAI’s two most important years — the period that transformed a research lab into the most valuable private company on Earth and left behind a trail of departures, broken promises, and unanswered questions about whether the company that started the AI race can be trusted to finish it responsibly.
After the Coup
The story of OpenAI in 2024 cannot be told without starting in November 2023.
On November 17, the OpenAI board of directors fired Sam Altman as CEO. The stated reason was that Altman “was not consistently candid in his communications with the board.” The full reasons have never been disclosed, but subsequent reporting revealed concerns about Altman withholding information about ChatGPT’s launch timeline, his personal ownership of OpenAI’s startup fund, and what the board described as “inaccurate information about the small number of formal safety processes that the company did have in place.”
The firing lasted five days. Within that window, 745 of OpenAI’s 770 employees signed a letter threatening to resign if the board did not step down and reinstate Altman. Microsoft CEO Satya Nadella publicly offered to hire all of them. The board capitulated. Altman returned on November 22 with a new board that included Bret Taylor as chair, Larry Summers, and Adam D’Angelo — the only member of the original board to remain.
An internal investigation in March 2024 concluded that Altman’s behavior “did not mandate removal.” The finding effectively validated Altman’s narrative and delegitimized the board members who had fired him. Helen Toner, one of the ousted directors, later published a detailed account arguing that the board had legitimate concerns. The public largely sided with Altman. The employees had already voted with their signatures.
The aftermath reshaped OpenAI more profoundly than any product launch or technical breakthrough. The board coup demonstrated, conclusively, that OpenAI’s nonprofit governance structure could not withstand pressure from the company’s employees, investors, and commercial partners. The board had exercised its authority exactly as the nonprofit structure was designed to allow — and had been overruled within a week by the combined force of talent, capital, and market expectations.
The lesson was absorbed by everyone in the organization. The board could fire the CEO, but it couldn’t survive the consequences. Safety-oriented governance, in practice, was a veto that could be overridden. The nonprofit structure was a legal form, not a functional constraint.
What followed was a year of departures that stripped the company of nearly every senior figure who had prioritized safety over speed.
The Exodus
Ilya Sutskever left first.
OpenAI’s co-founder and chief scientist — the person most responsible for the technical vision behind GPT-2, GPT-3, and the decision to scale language models that created the entire modern AI industry — announced his departure in May 2024. He had been one of the board members who voted to fire Altman. After the reinstatement, Sutskever remained at the company for six months in a diminished role before leaving to found Safe Superintelligence (SSI), a startup whose name was itself an implicit rebuke of his former employer.
Hours after Sutskever’s announcement, Jan Leike resigned. Leike had led OpenAI’s superalignment team — the group tasked with ensuring that superintelligent AI systems remained aligned with human values. The team was less than a year old. Leike did not leave quietly. He posted a detailed critique on X, writing that OpenAI’s “safety culture and processes have taken a backseat to shiny products.” He described his team as “sailing against the wind,” struggling for computing resources that were being redirected to commercial model training. He joined Anthropic.
The superalignment team was dissolved.
In September 2024, the departures accelerated. Mira Murati, OpenAI’s CTO and the person who had served as interim CEO during Altman’s five-day absence, announced she was leaving. The same day, Altman confirmed that Bob McGrew, the chief research officer, and Barret Zoph, VP of research, were also departing. Murati had been at OpenAI for six and a half years. She later founded Thinking Machines Lab, an AI research company.
John Schulman, a co-founder who had been instrumental in developing reinforcement learning from human feedback (RLHF) — the technique that made ChatGPT possible — left to join Anthropic. Daniel Kokotajlo and Cullen O’Keefe, members of OpenAI’s governance team, resigned. Kokotajlo stated publicly that he had “lost trust in OpenAI leadership and their ability to responsibly handle AGI.”
In October 2024, Miles Brundage, senior advisor for AGI readiness, departed. He wrote that his research would be “more impactful externally.”
The pattern continued into 2025. At least seven researchers left for Meta’s superintelligence lab over the summer. Chief people officer Julia Villagra departed in August. Chief communications officer Hannah Wong left later in the year. In February 2026, OpenAI confirmed it had disbanded the Mission Alignment team — the successor to the superalignment team — after just sixteen months. Its leader, Joshua Achiam, was moved to a newly created “chief futurist” role with undefined responsibilities.
The cumulative effect was stark. Of the people most associated with AI safety at OpenAI — the researchers who had built the alignment teams, the executives who had advocated for caution, the board members who had tried to enforce accountability — essentially none remained in positions of influence by early 2026. The people who stayed, and the people who were hired to replace the departed, were oriented toward shipping products. The company that had started as a nonprofit dedicated to safe AI had become a for-profit corporation with no senior safety leadership and a mission statement that no longer mentioned the word “safely.”
The Restructuring
The corporate restructuring that OpenAI completed on October 28, 2025, was the legal expression of a transformation that had already happened in practice.
The old structure was baroque. OpenAI had been founded in 2015 as a nonprofit. In 2019, it created a “capped profit” subsidiary to attract investment, with investor returns capped at 100x the original investment. The nonprofit board retained ultimate control, including the theoretical power to shut down the for-profit subsidiary if it determined that the company’s mission was being compromised.
The board coup demonstrated that this power was theoretical. The restructuring made it official.
Under the new structure, OpenAI Inc. became the OpenAI Foundation — a nonprofit that would hold a 26% stake in a new for-profit entity called OpenAI Group, organized as a public benefit corporation. Microsoft received a 27% stake, reflecting its $13.75 billion in cumulative investment. OpenAI employees received 26%. The remaining equity, roughly 21%, was distributed among other investors.
The nonprofit retained “special voting and governance rights,” including the ability to appoint all board members of the for-profit entity. In theory, this preserved nonprofit oversight. In practice, the foundation controlled a minority equity position in a company whose largest shareholder was Microsoft and whose CEO had already survived one attempt by a nonprofit board to exercise its authority.
The negotiations with Microsoft took nearly a year and were, by multiple accounts, contentious. The original partnership had included an unusual provision: if OpenAI achieved artificial general intelligence (AGI), as determined by the OpenAI board, Microsoft’s commercial rights would be terminated. This clause gave OpenAI enormous leverage — it could, in theory, declare AGI at any time and cut Microsoft out. Microsoft, understandably, objected to this arrangement. The restructured deal extended Microsoft’s access to OpenAI models through 2032, with AGI determinations now subject to review by an independent panel rather than the OpenAI board alone.
There was another source of tension, reported in early March 2026: OpenAI was building a product that directly competed with GitHub, Microsoft’s $7.5 billion developer platform. The company that depended on Microsoft for infrastructure, distribution, and $13.75 billion in funding was now encroaching on one of Microsoft’s most strategically important properties. The relationship, always complex, was becoming adversarial in dimensions that neither partner had fully anticipated.
The Product Machine
While the organization was being restructured and the safety teams were being disbanded, the product teams were executing a shipping cadence that left competitors scrambling for responses.
To understand what happened, it helps to recall where OpenAI stood at the start of 2024. GPT-4, released in March 2023, had been the company’s flagship for nearly a year — an eternity in AI. Google had launched Gemini. Anthropic had shipped Claude 2, then Claude 3. Meta was releasing open-source Llama models that, in some configurations, approached GPT-4 performance at zero cost. The moat that ChatGPT had built through first-mover advantage was eroding on every side. The question was not whether OpenAI would respond, but whether it could respond fast enough.
The answer was the o-series. The o1 model, introduced in late 2024, represented a fundamental architectural shift — OpenAI’s first “reasoning-first” model, designed to decompose problems into steps and think through them sequentially rather than generating answers in a single forward pass. The approach traded latency for accuracy: o1 was slower than GPT-4 but dramatically better on tasks requiring logical reasoning, mathematical proof, and multi-step analysis. It was also expensive to run, which created a pricing challenge that would recur with every subsequent reasoning model.
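For developers, the tradeoff was visible at the API level. Below is a minimal sketch of timing the same prompt against a single-pass model and a reasoning model, assuming the OpenAI Python SDK; the prompt, timing harness, and model names are illustrative, not drawn from OpenAI’s documentation:

```python
import time

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "A train leaves at 3:40 pm averaging 72 km/h. "
    "How far has it traveled by 5:10 pm?"
)

def timed_answer(model: str) -> tuple[str, float]:
    """Send the same prompt to a model and measure wall-clock latency."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return response.choices[0].message.content, time.perf_counter() - start

# A single-pass model answers almost immediately; a reasoning model
# spends extra seconds on hidden chain-of-thought tokens first.
for model in ("gpt-4o", "o1"):  # illustrative model names
    answer, seconds = timed_answer(model)
    print(f"{model} ({seconds:.1f}s): {answer}\n")
```

The extra seconds were the product: paid for in latency and inference cost, returned as accuracy on multi-step problems.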
The o3 and o4-mini models followed in spring 2025. Each iteration improved chain-of-thought reasoning while reducing inference costs. The o-series became the backbone of OpenAI’s enterprise offering — the models that Fortune 500 companies deployed for financial analysis, legal document review, and software engineering tasks where accuracy mattered more than speed.
Then came GPT-5.
The launch on August 7, 2025, was a livestreamed event that Sam Altman treated with the production values of a consumer electronics keynote. He called GPT-5 “a significant step along our path to AGI.” The benchmarks justified some of the rhetoric: 94.6% on AIME 2025 (math), 74.9% on SWE-bench Verified (real-world coding), 88% on Aider Polyglot (multi-language coding), 84.2% on MMMU (multimodal understanding). GPT-5 unified the reasoning capabilities of the o-series with the multimodal fluency of GPT-4o into a single model family — gpt-5, gpt-5-mini, gpt-5-nano, and gpt-5-chat — available to all ChatGPT users, free and paid.
The decision to make GPT-5 the default model for free users was commercially aggressive and strategically important. It meant that the 850 million people who used ChatGPT without paying were now interacting with OpenAI’s most capable model. The logic was straightforward: free users who experienced GPT-5 would be more likely to convert to paid subscribers. The cost was equally straightforward: inference at scale for hundreds of millions of free users was enormously expensive. The bet was that conversion rates would justify the burn.
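The shape of that bet can be sketched with back-of-the-envelope arithmetic. In the sketch below, the user count and gross margin come from this article; the per-user serving cost is an assumed figure for illustration, not a number from OpenAI’s books:

```python
# Back-of-the-envelope on the free-tier bet. The user count and margin
# come from this article; the per-user cost is an assumed figure.
free_users = 850_000_000        # free weekly actives on GPT-5
cost_per_free_user = 0.25       # assumed monthly serving cost, USD
subscription_price = 20.0       # USD per month
gross_margin = 0.33             # 2025 adjusted gross margin

monthly_free_cost = free_users * cost_per_free_user
margin_per_subscriber = subscription_price * gross_margin

# Paying subscribers needed just to cover the free tier's serving bill.
breakeven_subscribers = monthly_free_cost / margin_per_subscriber
print(f"Free-tier serving cost: ${monthly_free_cost / 1e6:,.0f}M per month")
print(f"Subscribers needed to cover it: {breakeven_subscribers / 1e6:.0f}M")
```

At these assumed costs, covering the free tier alone would consume the gross profit of roughly 30 million subscribers, a reminder of why the conversion bet had to work.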
GPT-5.2 followed in December 2025, launched the same day that Google shipped its deep research agent — a coincidence that underscored how tightly the release cycles of the two companies had become synchronized. GPT-5.3-Codex arrived shortly after, described as OpenAI’s “most capable agentic coding model,” with 25% faster performance and improved repo-scale reasoning.
The Codex product line deserved attention on its own. What had begun as a code-completion tool in 2021 had evolved into something qualitatively different: an autonomous software development agent that could read a codebase, understand the architecture, write and test code, and execute multi-hour tasks with minimal human oversight. The Codex app launched on macOS and then Windows, designed to manage multiple coding agents running in parallel on long-running tasks. Weekly active users tripled in the first quarter of 2026. Token consumption increased fivefold. Engineers reported using Codex not as a typing assistant but as a junior developer — assigning it tickets, reviewing its pull requests, and occasionally being surprised by solutions they hadn’t considered.
Operator, OpenAI’s web-browsing agent, was launched as a subscriber benefit — an AI that could navigate websites, fill out forms, and complete multi-step tasks autonomously. Two years earlier, Anthropic had demonstrated a similar capability with Claude’s “computer use” feature and been cautious about deploying it broadly. OpenAI shipped it as a subscription perk, available to anyone willing to pay $20 per month. The implicit message: if caution was a product strategy, OpenAI was choosing a different one.
The open-source pivot added another dimension. In mid-2025, OpenAI released gpt-oss, an open-weight model family — the company’s first significant move into territory that Meta’s Llama and Mistral had dominated. The decision was a concession that the open-source ecosystem could not be ignored. Developers who needed to run models on their own infrastructure, fine-tune for specific domains, or avoid API dependency had been migrating to Llama and Mistral. The open-weight release was designed to keep them in the OpenAI ecosystem, even if they weren’t paying for API access.
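The appeal of an open-weight model is concrete: the weights run locally, with no API key and no per-token billing. A minimal sketch of that workflow using Hugging Face transformers, where the hub identifier and generation parameters are assumptions for illustration rather than details confirmed by the release:

```python
# A minimal local-inference sketch with Hugging Face transformers.
# The hub identifier below is an assumption for illustration; check
# the actual gpt-oss release for published weights and license terms.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # assumed identifier
    device_map="auto",            # shard across available GPUs (needs accelerate)
)

# No API key, no per-token billing, no data leaving the machine:
# the properties that had been pulling developers to Llama and Mistral.
result = generator(
    "List the tradeoffs of self-hosting an open-weight model:",
    max_new_tokens=128,
)
print(result[0]["generated_text"])
```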
The user numbers told the clearest story. ChatGPT’s weekly active users grew from 300 million in December 2024 to 400 million in February 2025 to 800 million by mid-2025 to 900 million by February 2026. Paid subscribers passed 50 million, up from an estimated 20 million at the start of 2025. OpenAI had been adding roughly 433,000 new paying subscribers per week since July 2025. The platform processed over 2.5 billion queries per day.
The revenue trajectory matched the user growth. OpenAI hit its first $1 billion revenue month in July 2025. Annual recurring revenue reached $12 billion by mid-year, then $20 billion by year’s end. Revenue had grown from $2 billion in 2023 to $6 billion in 2024 to $20 billion in 2025. No enterprise software company in history had scaled this fast. Salesforce took 20 years to reach $20 billion in revenue. Microsoft’s cloud business took over a decade. OpenAI did it in three.
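The headline figures are easy to cross-check. A quick sketch, using this article’s numbers and the standard convention that a monthly figure times twelve gives an annualized run rate:

```python
# Cross-checking the headline revenue figures from this article.
monthly_revenue_b = 1.0                      # $1B month, July 2025
print(f"Annualized run rate: ${monthly_revenue_b * 12:.0f}B")  # matches mid-year ARR

revenue_b = {2023: 2, 2024: 6, 2025: 20}     # full-year revenue, $B
years = sorted(revenue_b)
for prev, cur in zip(years, years[1:]):
    print(f"{prev} -> {cur}: {revenue_b[cur] / revenue_b[prev]:.1f}x growth")
```

Tripling revenue two years in a row, at this base, is the pace that no prior enterprise software company had matched.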
In November 2025, OpenAI added a free ad-supported tier. Ads. The nonprofit research lab founded in 2015 to ensure AI benefits humanity was now showing advertisements to users of its AI chatbot. The decision made commercial sense — the ad tier expanded the user base and created a revenue stream that didn’t depend on subscription conversion. But it also marked a symbolic crossing. The company had moved from “unconstrained by a need to generate financial return” to an ad-supported business model in six years. The transformation was complete before the mission statement was even updated to reflect it.
The Cash Furnace
The revenue numbers were extraordinary. The loss numbers were worse.
OpenAI lost approximately $5 billion in 2024 on $3.7 billion in revenue. In the first half of 2025, the company posted $4.3 billion in revenue and $13.5 billion in net losses. The full-year 2025 cash burn was approximately $9 billion on $13 billion in sales — a cash burn rate of roughly 70% of revenue.
The losses were driven by compute. OpenAI’s computing capacity grew from 0.6 gigawatts in 2024 to 1.9 gigawatts in 2025, and the cost of that infrastructure was staggering. Development spending on next-generation models consumed $6.7 billion in just the first half of 2025. Inference costs — the expense of running models for users — quadrupled during the year. The adjusted gross margin collapsed from 40% in 2024 to 33% in 2025. Revenue was growing fast, but costs were growing faster.
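The burn arithmetic is stark when reproduced directly from the figures above:

```python
# Reproducing the burn-rate and margin arithmetic from this article.
revenue_2025_b = 13.0       # full-year 2025 sales, $B
cash_burn_2025_b = 9.0      # full-year 2025 cash burn, $B
print(f"Burn as a share of revenue: {cash_burn_2025_b / revenue_2025_b:.0%}")  # ~69%

# Margin compression: each revenue dollar yields less gross profit.
for year, margin in ((2024, 0.40), (2025, 0.33)):
    print(f"{year}: {margin:.0%} adjusted gross margin, "
          f"${margin:.2f} gross profit per revenue dollar")
```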
The projections published in OpenAI’s financial documents painted a sobering picture. The company expected operating losses to balloon through 2028, reaching roughly three-quarters of that year’s revenue. Cumulative negative free cash flow between 2024 and 2029 was projected at $115 billion to $143 billion. Profitability was not expected until 2029 or 2030, by which point OpenAI projected revenue of approximately $200 billion.
The gap between present losses and projected future profits was being bridged by an ocean of capital. The $40 billion round in March 2025 at a $300 billion valuation. A secondary share sale at $500 billion in October 2025. And then, in February 2026, the $110 billion round — $50 billion from Amazon, $30 billion each from Nvidia and SoftBank — at a $730 billion pre-money valuation.
The $110 billion round was the largest private funding in history. It valued OpenAI at more than all but a handful of public companies globally. It gave the company enough capital to sustain its burn rate for several years. And it was explicitly tied to the Stargate project — the $500 billion data center joint venture with SoftBank, Oracle, and MGX that President Trump had announced in January 2025.
Stargate was infrastructure at a scale that defied easy comprehension. The project planned to build data centers across multiple states — Texas, New Mexico, Ohio, and additional sites — with a combined capacity approaching 7 gigawatts and over $400 billion in investment over three years. A 1-gigawatt campus in Milam County, Texas, broke ground in October 2025. SoftBank CEO Masayoshi Son served as chairman of the venture.
The project was not without friction. Reports surfaced that OpenAI, Oracle, and SoftBank had disagreed over who would control the facilities, their design, and the long-term lease structures. A compromise was eventually reached, with OpenAI controlling facility design and signing long-term leases while SoftBank’s subsidiary developed and owned the physical infrastructure. But the disputes delayed construction and highlighted the complexity of managing a multi-party infrastructure project at this scale.
The fundamental question remained: was OpenAI building infrastructure for demand that existed, or for demand it hoped to create? The company’s own financial projections assumed revenue growth from $20 billion to $200 billion in five years — a tenfold increase that would require ChatGPT to go from 50 million paying subscribers to 220 million, the API business to expand into every major enterprise, and new product lines to materialize at scale. These were assumptions, not commitments. And they were being underwritten by $143 billion in projected cash burn.
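Compounding makes clear how demanding those assumptions were. A short sketch of the implied growth rate, using the article’s own figures:

```python
# What "tenfold in five years" requires, compounded annually.
start_b, target_b, years = 20.0, 200.0, 5        # $B, from this article
cagr = (target_b / start_b) ** (1 / years) - 1
print(f"Required growth: {cagr:.1%} per year")   # ~58.5%, every year, for five years

# The implied subscriber scaling from the same projections.
subs_now, subs_needed = 50e6, 220e6
print(f"Paying subscribers must grow {subs_needed / subs_now:.1f}x")  # 4.4x
```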
The Microsoft Knot
No relationship in the AI industry is more important or more complicated than the one between OpenAI and Microsoft.
Microsoft has invested $13.75 billion in OpenAI. It holds 27% of the restructured company. It provides the Azure infrastructure on which OpenAI’s models are trained and deployed. It distributes OpenAI’s technology through Copilot, its AI assistant embedded in Office, Windows, and Azure. The partnership generated billions of dollars in revenue for both companies and was, for a time, the defining alliance of the AI era.
It was also, increasingly, a source of tension.
The AGI clause in the original partnership agreement had given OpenAI the power to cut Microsoft off from future technology if AGI was achieved. Microsoft spent months negotiating to limit this provision. The restructured agreement extended Microsoft’s access through 2032 and replaced OpenAI’s unilateral AGI determination with an independent review process. Microsoft got what it wanted, but the negotiation itself revealed the underlying dynamic: OpenAI was using its leverage as a technology provider to renegotiate terms with its largest investor and infrastructure partner.
The competitive friction was also sharpening. OpenAI’s Codex app was evolving from a code-generation tool into an agentic software development platform — a product category that overlapped directly with GitHub Copilot, which Microsoft had built on OpenAI’s earlier models. By March 2026, reports indicated that OpenAI was building a product that would compete with GitHub itself. The company that owed its existence to Microsoft’s $13.75 billion was now building tools that competed with Microsoft’s crown jewels.
Microsoft, for its part, was not standing still. The company had hired Mustafa Suleyman — DeepMind co-founder and Inflection AI co-founder — to lead its consumer AI division. It was investing in Mistral, the French open-source AI company. It was building its own small language models with the Phi series. The message was clear: Microsoft was diversifying its AI strategy beyond OpenAI, even as it remained the company’s largest shareholder.
The relationship was becoming a study in mutual dependency and mutual suspicion. Microsoft needed OpenAI’s models to power Copilot. OpenAI needed Microsoft’s infrastructure to train and serve those models. But each company was quietly building the capability to survive without the other. Microsoft was developing its own models. OpenAI was building Stargate to reduce its dependence on Azure. The partnership agreement ran through 2032. The question was whether the relationship would last that long in its current form, or whether one partner would find a way to make the other dispensable.
The Competitive Landscape
OpenAI’s dominance in consumer AI chatbots was real but narrower than the headlines suggested. ChatGPT held 64.5% of tracked chatbot usage in January 2026, but the market was fragmenting.
Google’s Gemini 3, launched in November 2025, was the first model to convincingly match GPT-5 on hard reasoning benchmarks. At the 2025 International Mathematical Olympiad, both models solved five of six problems for identical scores. Google had 750 million monthly active users on the Gemini app — a figure that, depending on how you measured engagement, either trailed or rivaled ChatGPT’s numbers. More importantly, Google had distribution advantages that OpenAI could not replicate: Search, Android, Workspace, and a cloud business growing at 48% year-over-year. When Google embedded Gemini into Gmail’s compose window or Android’s system-level assistant, it reached users who would never download a standalone AI app.
Anthropic, though smaller, had carved out the enterprise market with Claude. The company’s safety-first positioning — ironic, given that its founders had left OpenAI over safety concerns — had resonated with enterprise buyers who needed AI they could trust in regulated industries. Anthropic’s ARR hit $14 billion by early 2026, growing faster than OpenAI had at the same stage. Claude’s performance on coding and analysis tasks had become, by some measures, superior to GPT-5 in specific domains. The company’s $380 billion valuation reflected market confidence that safety-conscious AI was not a niche but a mainstream demand.
Meta’s open-source Llama models represented a different kind of threat — one that didn’t compete with OpenAI for revenue but competed for developer mindshare and ecosystem control. Every company that fine-tuned Llama for its own use case was a company that didn’t need an OpenAI API key. Meta didn’t need to monetize Llama directly; it benefited from the research contributions of the open-source community and from weakening the competitive positions of companies that charged for model access. The open-source threat was structural, and OpenAI’s gpt-oss release was an acknowledgment that ignoring it was no longer an option.
The competitive picture added urgency to OpenAI’s financial challenge. The company needed to maintain its lead in model capability while simultaneously building out Stargate infrastructure, expanding the product portfolio, and reaching profitability before the $143 billion cash burn exhausted investor patience. Any deceleration in model quality — any release cycle where a competitor clearly surpassed GPT — could trigger a shift in developer and enterprise loyalty that would be difficult to reverse.
What OpenAI Has Become
Strip away the funding rounds and the product launches and the organizational drama, and what remains is a company that occupies a unique and potentially unstable position in the technology industry.
OpenAI has 900 million weekly active users and is losing billions of dollars. It has the best language models in the world and no senior safety leadership. It has a nonprofit foundation that controls its board and a for-profit structure that controls its economics. It has a partnership with Microsoft that is simultaneously its greatest asset and its most dangerous dependency. It has a mission to benefit all of humanity and a business model that requires $143 billion in cash burn before profitability.
The company deleted the word “safely” from its mission statement. The people who would have objected — Sutskever, Leike, Murati, Schulman, Kokotajlo, Brundage, Achiam — are gone. Some are building competitors. Some are doing the safety research they couldn’t do at OpenAI. Some are watching from the outside, waiting to see whether the company they helped create can justify the trust that 900 million users place in it every week.
The revenue trajectory suggests OpenAI will eventually become profitable. The $200 billion revenue target for 2030 is ambitious but not implausible given current growth rates. The Stargate infrastructure will provide the compute capacity needed for next-generation models. The Codex app and Operator represent new product categories that could drive enterprise adoption. GPT-6, or whatever the next frontier model is called, will be trained on more compute than any model in history.
But profitability and safety are not the same question. The company that was founded to ensure AI benefits humanity has become a company whose primary obligation is to generate returns for investors who have committed over $168 billion. The nonprofit foundation holds board appointment rights, but the board coup of 2023 demonstrated that governance mechanisms are only as strong as the willingness to enforce them — and the people willing to enforce them are no longer there.
OpenAI in 2026 is the most powerful AI company in the world. It built ChatGPT. It shipped GPT-5. It has more users than any AI product in history. It is also a company that has systematically dismantled every internal mechanism that was designed to ensure its technology was developed responsibly. The superalignment team is gone. The mission alignment team is gone. The AGI readiness advisor is gone. The co-founder who prioritized safety is gone. The CTO who served as a counterweight to the CEO is gone. The word “safely” is gone.
What remains is a product machine, extraordinarily effective, burning through cash at a rate that requires continuous infusions of capital, building infrastructure at a scale that commits it to a future it cannot yet see, and governed by a structure that has already failed its most important test.
The IPO looms as the next transformation. OpenAI’s PBC structure was designed to enable a public offering, and the $730 billion valuation implies a listing that would rank among the largest in history. An IPO would provide liquidity for employees and early investors, but it would also subject the company to quarterly earnings scrutiny from public market analysts who care about margins, not mission statements. The tension between “benefit all of humanity” and “beat consensus EPS” is not one that any company has resolved gracefully.
There is also a temporal question. OpenAI’s financial projections assume profitability by 2029 or 2030. That means three to four more years of losses — potentially $100 billion or more in cumulative negative cash flow — before the business sustains itself. The investors who funded the $110 billion round are sophisticated enough to understand this timeline. But public market investors, if and when the IPO happens, may be less patient. The history of technology IPOs includes companies that went public with unprofitable business models and were punished by the market when profitability took longer than expected. OpenAI’s losses are not merely large; they are historically unprecedented for a pre-IPO company.
The next two years will determine whether OpenAI can grow into the company its valuation implies. The product execution has been extraordinary. The revenue growth has been extraordinary. The competitive position, while increasingly contested, remains dominant. If the trajectory holds, OpenAI will become one of the most valuable companies on Earth.
But none of that addresses the question that the departed employees raised. Speed and scale are not safety. Revenue is not responsibility. A $730 billion valuation does not validate the decision to disband the superalignment team, any more than a $1 billion revenue month justifies deleting the word “safely” from a mission statement.
Nine hundred million people use ChatGPT every week. Most of them have never heard of Ilya Sutskever. They don’t know that the superalignment team was dissolved, or that its successor was dissolved too. They don’t know that the word “safely” was deleted. They open the app, type a question, and get an answer.
The answer is usually good. The models are better than they have ever been. The product works.
Whether the company behind it can be trusted to handle what comes next — the models that are smarter, the agents that are more autonomous, the systems that are harder to control — is a question that OpenAI’s own former employees have already answered.
They left.
Published March 6, 2026. This investigation covers OpenAI’s evolution from the November 2023 board crisis through early 2026.
Related Reading
- Google DeepMind After the Merger: Nobel Prizes, Bleeding Talent, and a $185 Billion Bet
- Anthropic: The Business Logic of AI Safety First
- The Year of AI Agents: From Concept to Production Reality Gap
- Sam Altman and OpenAI: A Comprehensive Deep Analysis
About the Author
Gene Dai is the co-founder of OpenJobs AI, focusing on AI-powered recruitment technology and the intersection of artificial intelligence with enterprise software.