Sam Altman: The Man Who Cannot Be Fired
The Five Days
On the afternoon of Friday, November 17, 2023, Sam Altman logged into a Google Meet expecting a routine conversation. Ilya Sutskever, OpenAI’s chief scientist and co-founder, had texted the night before asking to talk at noon. When Altman joined the call, the entire board was there. Greg Brockman, OpenAI’s president and Altman’s closest ally, was not.
Sutskever told Altman he was fired.
The board’s statement, posted minutes later, said Altman “was not consistently candid in his communications with the board.” No further detail. No warning. Microsoft, which had invested $13 billion into OpenAI, learned about the firing one minute before the public announcement. Investors got zero notice.
What followed was the most chaotic five days in the history of Silicon Valley corporate governance. By Sunday night, Satya Nadella had announced that Altman would join Microsoft to lead a new AI research group, and Emmett Shear, the former Twitch CEO, had been installed as OpenAI’s interim chief. Within a day, 745 of OpenAI’s roughly 770 employees signed a letter threatening to quit and follow Altman to Microsoft if the board didn’t resign. By Tuesday, Shear was negotiating Altman’s return. On Wednesday, November 22, Altman walked back into OpenAI as CEO. The board members who had fired him — Helen Toner, Tasha McCauley, and effectively Sutskever — were out.
Five days. That is all it took. And in those five days, something became clear that had been true for years but never tested: Sam Altman was OpenAI. Not the research. Not the mission statement. Not the board. The person. Remove the person and 745 employees threaten to walk. The largest AI company on the planet nearly disintegrates over a weekend because one man is not in the building. No nonprofit charter, no governance structure, no $13 billion investor can hold the thing together without him.
The question worth asking is not how Altman survived the coup. That part is straightforward — he had the employees, the investors, and the leverage. The question is how a thirty-eight-year-old Stanford dropout accumulated enough gravity to make himself unfireable. And what happens when the person who cannot be removed is also the person deciding how quickly to push toward artificial general intelligence.
The Kid From St. Louis
Samuel Harris Altman grew up in Clayton, Missouri, a suburb of St. Louis. His mother was a dermatologist. His father was a real estate broker. At eight, he got an Apple Macintosh and started teaching himself to code. He came out as gay to his parents at sixteen, in a conservative Midwestern city, which he has described as formative — the experience of being different, of having to assess risk and navigate social systems from a young age.
He went to John Burroughs School, a private institution. Then Stanford, where he studied computer science. He lasted two years.
In 2005, at nineteen, Altman dropped out to start Loopt, a location-sharing app. This was before smartphones were ubiquitous, before Instagram, before the idea of broadcasting your location to friends was something anyone wanted to do. Loopt was ahead of its time in the way that being ahead of your time usually means: nobody used it. The company raised $30 million from Sequoia Capital, New Enterprise Associates, and Y Combinator. It signed partnerships with Sprint and other carriers. It had a real team, real investors, real infrastructure. What it didn’t have was users.
In 2012, Loopt was sold to Green Dot Corporation for $43.4 million. After seven years, the investors roughly broke even. Altman personally walked away with about $5 million. He co-founded a venture fund called Hydrazine Capital with his brother Jack, using most of that money as seed capital.
The Loopt failure matters because it established a pattern. Altman was not a great product builder. He was not a great engineer. What he was, from age nineteen, was someone who could raise money, recruit people, and tell a story about the future that made smart people want to follow him. The product failed. The network grew.
The Kingmaker
Paul Graham, Y Combinator’s co-founder, noticed Altman early. Loopt had been in YC’s first batch in 2005, when Altman was nineteen. Graham later wrote that Altman was one of the most impressive founders he’d ever met — not because of what Altman built, but because of how he thought.
In 2011, Altman started advising YC. In 2014, at twenty-eight, he replaced Graham as president. The appointment was controversial inside the YC community. Altman had run one company that had failed to find product-market fit. He had no track record as an investor. What he had was Graham’s endorsement, which in YC’s world was sufficient.
What Altman did at YC was revealing. He didn’t overhaul the program’s structure or its famous dinner talks. He scaled it. Under his presidency, YC funded more companies per batch, moved faster, and expanded its ambition far beyond startups. He launched YC Research, a nonprofit arm that would explore questions too long-term for venture returns: basic income experiments, new city design, the future of computing. The move previewed what Altman would later do at OpenAI — use a nonprofit structure to pursue ideas that VCs wouldn’t fund, then leverage the attention and relationships those ideas generated.
He briefly served as CEO of Reddit for eight days in 2014, between Yishan Wong’s resignation and Ellen Pao’s appointment as interim CEO. Eight days. It remains one of the odder footnotes in Silicon Valley history — a sitting YC president taking the helm of a portfolio company as a favor, then stepping aside without explanation. He invested personally across the YC portfolio, building a web of 400+ financial relationships that would later become his real power base. When OpenAI needed employees to choose sides in November 2023, many of those employees had been funded by Altman at YC, or knew someone who had been.
By the time Altman left YC in 2019, the accelerator had funded roughly 1,900 companies, including Airbnb, DoorDash, Instacart, Stripe, Reddit, and Twitch. Altman’s personal investment portfolio spanned over 400 companies, valued at approximately $2.8 billion by 2024. He had become one of the best-connected people in Silicon Valley without ever building a product that millions of people used.
This is the paradox that defines Altman’s career: his greatest skill is not building things. It is positioning himself at the center of things that other people build.
The Nonprofit Pivot
OpenAI was founded in December 2015, its plans hashed out over a now-famous dinner at the Rosewood Hotel in Menlo Park that summer. The attendees included Elon Musk, Peter Thiel, Reid Hoffman, and Altman. The initial pitch was idealistic: create a nonprofit research lab to develop artificial general intelligence safely and openly, as a counterweight to Google, which had acquired DeepMind the previous year. The founding donors pledged $1 billion. Musk and Altman served as co-chairs.
The early years were unremarkable. OpenAI published papers, hired researchers, and contributed to the open-source ecosystem. It was one of several well-funded AI labs. It was not the most important one. Google Brain and DeepMind had more resources, more talent, and more institutional momentum.
Two things changed the trajectory. The first was Musk’s departure in 2018. He left the board citing potential conflicts with Tesla’s AI work, but the split was personal in a way that only became visible later. Musk and Altman had been close — co-chairs, co-visionaries, co-funders. Then Musk proposed taking over OpenAI himself and merging it with Tesla. The board said no. Musk left. The friendship was over.
What followed was a slow escalation. Musk started tweeting barbs about OpenAI’s direction. He founded xAI in 2023 as a direct competitor. He filed a lawsuit in early 2024 alleging that Altman had broken the founding agreement by turning OpenAI into a “closed-source de facto subsidiary of the largest technology company in the world” — meaning Microsoft. Internal emails that surfaced in the litigation showed Altman and Brockman discussing, as early as 2017, the need to raise far more capital than a nonprofit structure allowed. Musk withdrew the suit in June 2024, then refiled it that August. But the emails remain in the public record, and they tell a story of a CEO who understood from the beginning that the nonprofit structure was temporary.
The second change was the 2019 creation of OpenAI’s “capped-profit” subsidiary. The structure was novel: a for-profit company controlled by the nonprofit board, with investor returns capped at 100x their investment. Altman took no equity. He drew a salary of $76,001. The optics were deliberate. Here was a CEO building what might be the most valuable technology in history, taking nothing for himself. No stock options. No carried interest. Just a salary that wouldn’t cover a one-bedroom apartment in San Francisco. It made Altman look selfless. It also made him nearly impossible to criticize — how do you accuse a man of greed when he isn’t taking any money?
Microsoft invested $1 billion in the capped-profit entity in 2019. Then $10 billion more in January 2023. Then additional billions in subsequent rounds. By the time ChatGPT launched in November 2022, OpenAI had the compute, the talent, and the capital to pull ahead of every other lab on the planet.
Altman had turned a nonprofit research lab into the most valuable startup in history without ever owning a share of it. His defenders call this selflessness. His critics call it the longest play in Silicon Valley history — accumulating power instead of equity, influence instead of shares, on the theory that when the technology reshapes civilization, the person steering it won’t need a cap table to be the most important man in the room. Neither interpretation fully accounts for the other. Both might be true at once.
The Product Moment
ChatGPT launched on November 30, 2022. The board was not informed in advance. They found out on Twitter.
This detail, reported later by former board member Helen Toner, tells you everything. Altman launched the product that would reshape the technology industry without informing the people who were, technically, his bosses. He didn’t forget. He chose. And the board — the body that existed to provide oversight — learned about it the same way everyone else did: scrolling Twitter.
ChatGPT reached one million users in five days. A hundred million in two months — faster than TikTok, faster than Instagram, faster than anything. The viral growth validated Altman’s conviction that AI needed to be a product, not just a research paper. It also gave him something more valuable than equity: narrative control.
After ChatGPT, Altman became the face of artificial intelligence. In the spring of 2023, he went on a world tour — twenty-five countries in weeks, meeting heads of state, testifying before the U.S. Senate, drawing crowds of thousands in Lagos and London and Seoul. He told the Senate that AI regulation was necessary. He told developers at DevDay that the technology would transform every industry. He told everyone, everywhere, that this was the most important moment in human history. And the crowds kept growing. No other tech CEO since Steve Jobs has generated that kind of personal magnetism around a product. Altman was not just selling software. He was selling the future, and the future had his face on it.
The product cadence since then has been relentless. GPT-4 in March 2023. GPT-4 Turbo that November. GPT-4o with voice and vision. The o-series reasoning models. The o3 series in early 2025. When Google launched Gemini and briefly appeared to close the gap, Altman’s response was immediate. “It’s good to be paranoid and act quickly when a potential competitive threat emerges,” he told employees, calling an internal “code red” to accelerate development. When Anthropic’s Claude gained traction among developers, OpenAI shipped faster. When Meta released open-source models, OpenAI started giving away more of its own stack.
Then came the Pentagon. In early 2025, OpenAI signed an agreement with the U.S. Department of Defense for the use of its AI models. This was a company that had previously stated it would not work with military applications. The announcement triggered a backlash: ChatGPT uninstall rates spiked 295% the following day. Altman did not reverse course. He told staff that they “don’t have a say” in military decisions. The incident crystallized something: the man who had built his reputation on responsible AI development was willing to cross every line he had previously drawn, if the line stood between OpenAI and dominance.
By November 2025, Altman told investors OpenAI had reached $20 billion in annualized recurring revenue. He projected hundreds of billions by 2030. The company was burning more than $10 billion a year on compute and talent. But the growth curve was steep enough that investors kept writing checks.
The Coup’s Aftermath
Altman’s return after the November 2023 coup was not a restoration. It was a consolidation. The old board was replaced with a new one that included Bret Taylor (former Salesforce co-CEO), Larry Summers (former Treasury Secretary), and Adam D’Angelo (Quora), the lone holdover from the board that had fired him. These were Altman allies and pragmatists, not safety watchdogs. The governance structure that had made the coup possible — a nonprofit board with ultimate authority over a for-profit subsidiary — was the next thing to go.
In May 2025, OpenAI announced it would restructure as a public benefit corporation while keeping the nonprofit as a parent entity. This was presented as a compromise. In practice, it removed the cap on investor returns and cleared the path to an eventual IPO. By October 2025, the restructuring was complete. Microsoft owned roughly 27% of the new OpenAI Group, valued at approximately $135 billion.
The nonprofit that once controlled everything now sits above the for-profit entity with diminishing practical authority. The mission statement changed six times in nine years, most recently dropping the word “safely” from its commitment to AI that “benefits all of humanity.” Senator Elizabeth Warren sent Altman a letter in January 2026 questioning the restructuring.
Then came the February 2026 round: $110 billion from Amazon, Nvidia, and SoftBank at an $840 billion pre-money valuation. The largest private funding round in history. For context, $840 billion is roughly the GDP of Poland. It is more than the market capitalization of all but a handful of public companies on Earth.
Sam Altman, who owns zero equity in OpenAI and earns $76,001 per year, now runs an organization valued at nearly $1 trillion.
The Exodus
The departures tell a story that the fundraising announcements do not.
In May 2024, Ilya Sutskever left. He had been OpenAI’s chief scientist since day one, its intellectual anchor, the person whose research on scaling neural networks had made GPT possible. Six months earlier, Sutskever had been the one to fire Altman. Now Altman was back, and Sutskever was the one leaving. He founded Safe Superintelligence — the name itself a rebuke. He said almost nothing publicly. He didn’t need to.
The same week, Jan Leike resigned. Leike had co-led OpenAI’s Superalignment team, the group tasked with ensuring future AI systems remained under human control. His departure letter was not silent. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote. “I believe much more of our bandwidth should go toward preparing for the next generations of models.” OpenAI disbanded the Superalignment team immediately after Leike’s departure.
September 2024 was worse. CTO Mira Murati announced she was leaving. Chief Research Officer Bob McGrew and VP of Research Barret Zoph left the same day. Three of OpenAI’s most senior technical leaders, gone in a single afternoon, with no public explanation. In October, Miles Brundage, the senior adviser for AGI readiness, followed them out the door.
Then in February 2026, OpenAI disbanded its Mission Alignment team after just sixteen months. Across multiple waves of departures, more than twenty-five senior researchers have left.
The pattern is consistent. The people closest to the safety mission have been leaving. The people who stayed, or who joined after 2023, are building products. Dario Amodei left in 2021 and built Anthropic specifically because he believed OpenAI was not taking safety seriously enough. Sutskever left in 2024 and built Safe Superintelligence for the same reason. Leike went to Anthropic. The safety-minded founders and researchers have, one by one, concluded that the organization they joined no longer exists. Maybe this is the correct strategic choice for a company fighting Google, Anthropic, and Meta simultaneously. Maybe the original mission was always a fundraising story. Either way, it is Altman’s company now, and the people who disagreed are gone.
The Side Bets
Altman does not need OpenAI equity to be rich. His personal investment portfolio is worth approximately $2-3 billion as of 2026, built on early bets in Airbnb, Reddit, Stripe, Uber, Asana, and hundreds of other companies.
But his most revealing investments are the ones that intersect with AI’s future.
Helion Energy is a nuclear fusion company. Altman is its chairman. He invested $375 million in 2021 — the largest personal check he has ever written. Helion has a contract to supply Microsoft with fusion energy by 2028. If AI needs unlimited cheap energy, and Altman’s company provides the intelligence while his personal investment provides the power, the alignment of interests is extraordinary.
Oklo is a nuclear fission startup. Altman was its chairman from May 2024 to April 2025, having merged it with a SPAC he’d created. He stepped down, but his financial interest remains.
Worldcoin — now World — is a cryptocurrency project Altman co-founded in 2019 through a company called Tools For Humanity. The premise: scan people’s eyeballs with an orb-shaped device to create a global proof-of-personhood system. By 2023, the company had scanned two million eyes and raised $250 million. The project has drawn privacy concerns across Europe, investigations in Kenya, and skepticism from the crypto community. Altman argues it will be necessary in a world where AI makes it impossible to distinguish humans from machines online.
Each of these investments makes more sense if you assume that AGI is coming soon and that the world will need to reorganize around it. Fusion energy for compute. Proof of personhood for a post-AI internet. A universal basic income (another of Altman’s public causes) for the workers displaced by automation. Altman is not just building the technology. He is positioning himself across every adjacent market that the technology creates. The man who owns no equity in OpenAI may end up owning a piece of everything that OpenAI makes necessary.
The Candor Problem
The board that fired Altman used a specific phrase: “not consistently candid.” It was vague enough to seem petty. Later reporting filled in the details.
Helen Toner, the board member whose academic paper on AI safety partially triggered the crisis, told interviewers afterward that Altman had withheld information repeatedly. He didn’t tell the board about ChatGPT’s launch before it happened. He didn’t disclose that he personally owned OpenAI’s startup fund — a discovery the board made independently after months of what Toner described as “obfuscation.” Two executives reportedly provided documentation of what they called “psychological abuse” and “lying and being manipulative in different situations.”
Then there were the non-disparagement agreements. Departing employees were required to sign documents that would, in perpetuity, forbid them from criticizing OpenAI. If they refused, they could lose their vested equity. When this became public in May 2024 through former employee Daniel Kokotajlo’s disclosure, Altman posted on X that he had been “unaware” of the equity clawback provision. Leaked documents later showed his signature on the authorization.
Altman said he was “embarrassed” and that the provision would be removed. It was. But the sequence — deny, get caught, apologize, fix — has repeated enough times that it has become its own pattern. The question is whether the pattern reflects a leader who moves too fast and occasionally loses track of details, or one who operates on the principle that it is easier to ask forgiveness than permission.
The answer depends on how much you trust Sam Altman. That is exactly the problem with a governance structure that now depends almost entirely on trusting Sam Altman.
The Family
In January 2025, Altman’s younger sister Ann filed a federal lawsuit in the Eastern District of Missouri alleging that Sam had sexually abused her from 1997 to 2006, beginning when she was three and he was twelve. The allegations are graphic and severe.
Altman, his mother, and his brothers issued a joint statement denying the claims “utterly.” They described Ann as facing “mental health challenges” and refusing treatment. Ann Altman had made public claims since 2021 but had not previously taken the matter to court.
This article will not adjudicate the claims. The case is pending. What is relevant to this investigation is that the lawsuit exists, that it was filed at the apex of Altman’s public influence, and that it has received remarkably little sustained media coverage relative to the seriousness of the allegations. Whether this reflects editorial judgment about unresolved legal matters or the gravitational pull of Altman’s media relationships is a question others will have to answer.
What Altman Wants
Sam Altman is not a fraud. The products work. The revenue is real. ChatGPT changed how hundreds of millions of people interact with computers. The technical achievements of OpenAI’s research team — a team Altman recruited and funded — are genuine. Nobody questions this.
What people question is the governance. Or rather, the absence of it.
The $76,001 salary makes for a good story. It also obscures the fact that Altman’s personal portfolio — Helion, Oklo, World, and stakes in 400 other companies — positions him to profit enormously from the world that OpenAI creates, without any of that profit appearing on OpenAI’s books. He owns no equity in the company. He owns pieces of the infrastructure the company requires.
The safety commitment was real once. Altman co-founded OpenAI explicitly to counterbalance Google’s AI dominance. Then the Superalignment team lasted a year. The Mission Alignment team lasted sixteen months. Sutskever left. Leike left. Murati left. The mission statement dropped “safely.” The nonprofit board that fired Altman was replaced within a week. The new board is stacked with allies. The restructuring removed the cap on profits.
At forty, Altman faces no governance mechanism that could constrain him even if it wanted to. He survived the one attempt. He dismantled the structure that made it possible. He is now raising money at a pace that makes him indispensable to his investors — you don’t fire the man running your $840 billion bet.
The comparison that keeps surfacing in conversations with people who know him is not Steve Jobs, though that’s the one journalists reach for. It is J. Robert Oppenheimer. A man of enormous intellect and social skill who was given control of a project with civilizational consequences, who navigated the politics masterfully, who believed he was acting in humanity’s interest, and who — depending on whom you ask — was either vindicated or haunted by what he built.
Altman has read the Oppenheimer biographies. He has talked about them publicly. He knows the parallel. Whether he takes it as a warning or a roadmap is the question that will define the next decade of artificial intelligence.
Published March 6, 2026. This investigation covers Sam Altman’s career and OpenAI’s evolution through early 2026.
Related Reading
- OpenAI 2024-2025: The Company That Won Everything and Lost Its Way
- Jensen Huang and the $4 Trillion Bet: How a Dishwasher Built the Most Important Company in the World
- Google DeepMind After the Merger: Nobel Prizes, Bleeding Talent, and a $185 Billion Bet
- Anthropic: The Business Logic of AI Safety First
About the Author
Gene Dai is the co-founder of OpenJobs AI, focusing on AI-powered recruitment technology and the intersection of artificial intelligence with enterprise software.