The September 2024 Resignation That Shocked Silicon Valley

On September 25, 2024, Mira Murati posted a brief memo to OpenAI's internal channels that would send shockwaves through the AI industry. After six and a half years at OpenAI—the last two as Chief Technology Officer—she was stepping down. "I'm stepping away because I want to create the time and space to do my own exploration," she wrote. No new role announced, no startup revealed, just a vague promise of personal exploration.

The timing was extraordinary. Murati had just overseen the development and launch of the o1 reasoning model, arguably OpenAI's most significant technical achievement since GPT-4. The company was preparing a restructuring that would convert it from a nonprofit-controlled entity to a for-profit corporation valued at over $150 billion. And Sam Altman, whose relationship with Murati had grown complicated after the November 2023 board crisis, was consolidating power.

"Mira leaving was a seismic event," one OpenAI employee later told reporters. "She wasn't just CTO—she was the product visionary who turned research into products people actually used. ChatGPT, DALL-E, Codex, Sora—those were all Mira's product organizations. Without her, OpenAI becomes a research lab that struggles to ship."

Within hours of Murati's announcement, OpenAI's Chief Research Officer Bob McGrew and VP of Research Barret Zoph also resigned. The simultaneous departures suggested deeper tensions within OpenAI's leadership—tensions that would only become clear months later when Murati announced her next venture.

On February 15, 2025—less than five months after leaving OpenAI—Murati publicly launched Thinking Machines Lab, a San Francisco-based AI company focused on "collaborative general intelligence" through multimodal AI systems. The company had been operating in stealth since late 2024, quietly recruiting some of OpenAI's most talented researchers including John Schulman, a co-founder of OpenAI who had been instrumental in developing reinforcement learning from human feedback (RLHF), the technique that made ChatGPT possible.

Then, on July 15, 2025, Thinking Machines Lab announced what would become the largest seed round in venture capital history: $2 billion at a $12 billion post-money valuation, led by Andreessen Horowitz with participation from Nvidia, AMD, Accel, ServiceNow, Cisco, and Jane Street. For a company less than a year old with no publicly available product, the valuation was staggering—and it made Murati the highest-valued female founder in AI history.

In October 2025, Thinking Machines launched Tinker, a Python API for distributed language model fine-tuning that lets researchers train custom models without managing infrastructure. The product quickly attracted research groups at Berkeley, Princeton, and Stanford, along with Redwood Research—validating Murati's vision of making advanced AI development accessible beyond the handful of companies with billion-dollar compute budgets.
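Tinker's actual interface isn't detailed here, but the division of labor the article describes—researchers supply a base model, data, and hyperparameters while the service owns GPUs, scheduling, and checkpoints—can be sketched with a hypothetical client. None of the names below (`FineTuneClient`, `submit`, `wait`) are Tinker's real API; they are illustrative stand-ins only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FineTuneJob:
    base_model: str
    examples: List[Tuple[str, str]]   # (prompt, completion) pairs
    learning_rate: float
    status: str = "queued"
    checkpoint: str = ""

class FineTuneClient:
    """Hypothetical stand-in for a hosted fine-tuning service client."""

    def submit(self, base_model, examples, learning_rate=1e-4) -> FineTuneJob:
        # A real client would upload the dataset and enqueue a distributed
        # training run on the provider's cluster.
        return FineTuneJob(base_model, list(examples), learning_rate,
                           status="running")

    def wait(self, job: FineTuneJob) -> str:
        # A real client would poll a status endpoint until training finishes;
        # this stub completes instantly so the sketch runs end to end.
        job.status = "succeeded"
        job.checkpoint = f"{job.base_model}-ft-0001"
        return job.checkpoint

client = FineTuneClient()
job = client.submit("small-llm", [("2+2?", "4"), ("Capital of France?", "Paris")])
print(client.wait(job))  # → small-llm-ft-0001
print(job.status)        # → succeeded
```

The design point is the one the article credits to Tinker: the researcher's code never touches GPU provisioning or distributed training logic, only data and hyperparameters.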

This is the story of how Mira Murati—a mechanical engineer from Vlorë, Albania who won a scholarship to study in Canada at 16, worked on Tesla's Model X and VR hand-tracking interfaces, then joined an obscure AI research lab called OpenAI—became one of the most influential figures in artificial intelligence, survived the most dramatic boardroom coup in tech history, and walked away from the most valuable AI company in the world to build something she believed could be more important.

From Albania to Dartmouth—The Mechanical Engineer Who Found AI

Vlorë, Albania: The Scholarship That Changed Everything

Mira Murati was born on December 16, 1988, in Vlorë, a coastal city in southern Albania overlooking the Adriatic Sea. She grew up during a period of dramatic transformation for Albania, which had just emerged from decades of communist isolation under Enver Hoxha's regime. The 1990s brought both democracy and economic chaos, with the country struggling through political instability and the 1997 Albanian civil unrest that followed the collapse of pyramid investment schemes.

Murati's family valued education as the path to opportunity. Her parents encouraged her interest in mathematics and science, recognizing her analytical aptitude. By her teenage years, Murati excelled in mathematics competitions and science olympiads, winning regional championships that caught the attention of teachers who urged her to apply for international scholarship programs.

At age 16, Murati won a United World Colleges (UWC) scholarship to study at Pearson College UWC on Vancouver Island in Canada. The scholarship was life-changing. UWC schools bring together students from around the world to study the International Baccalaureate curriculum while living in residential communities that emphasize international understanding and social responsibility. For a teenager from Albania, it opened doors to a global network and world-class educational opportunities that would have been impossible to access otherwise.

"Pearson College completely changed my perspective," Murati later recalled. "I went from a small city in Albania to living with students from 70 different countries. We debated global issues, learned from each other's cultures, and developed this sense that we could contribute to solving big problems. It instilled this optimism that technology could be a force for positive change."

Murati graduated from Pearson College with an International Baccalaureate in 2005, having impressed faculty with her mathematical rigor and systems-thinking approach to problems. Her next destination: the United States.

Dartmouth Engineering: The Dual-Degree Path

Murati enrolled in a dual-degree program that combined liberal arts at Colby College with engineering at Dartmouth College's Thayer School of Engineering. She earned a Bachelor of Arts in Mathematics from Colby in 2011, followed by a Bachelor of Engineering in Mechanical Engineering from Dartmouth in 2012.

The Thayer School's engineering program emphasized hands-on problem-solving and interdisciplinary collaboration. Murati thrived in this environment, working on projects that combined mechanical systems with emerging digital technologies. Her senior design project focused on robotics and human-computer interaction, exploring how mechanical systems could respond intuitively to human input—themes that would resurface throughout her career.

"Mira had this rare combination of rigorous analytical thinking and aesthetic sensibility," one Dartmouth professor remembered. "She didn't just want to build systems that worked—she wanted them to be elegant and intuitive. That design philosophy would later become evident in how she approached product development at OpenAI."

Upon graduating in 2012, Murati faced a choice: continue to graduate school for a PhD in robotics, or enter industry to apply her engineering skills to real-world products. She chose industry, but with a specific focus—companies working at the intersection of physical systems and artificial intelligence.

Goldman Sachs Tokyo: The Unexpected Detour

Murati's first industry experience was an internship at Goldman Sachs in Tokyo in 2011, taken while she was still a student. The role was unusual for a mechanical engineering major—Goldman typically recruited computer science and finance graduates. But the group she joined focused on technology infrastructure and systematic trading systems, areas where a mechanical engineer's systems-thinking approach proved valuable.

The Tokyo internship exposed Murati to high-performance computing, algorithmic systems, and the challenges of building technology at massive scale. It also introduced her to Silicon Valley's tech elite, many of whom worked with Goldman on IPOs and financing rounds.

"Tokyo taught me about scale and precision," Murati said in a 2023 interview. "Financial systems have to work perfectly, 24/7, across global markets. There's no room for error. That discipline—building systems that are both powerful and reliable—influenced how I later thought about deploying AI products to millions of users."

After the Goldman internship, Murati worked briefly at Zodiac Aerospace from 2012 to 2013 as an Advanced Concepts Engineer, focusing on aerospace systems. But aerospace felt too slow, too incremental; Murati wanted to work on technology that could transform industries quickly. In 2013, she found it: Tesla.

Tesla (2013-2016): The Model X and the AI Awakening

In 2013, Mira Murati joined Tesla as a product manager on the Model X, Tesla's ambitious electric SUV with falcon-wing doors and advanced autopilot capabilities. The timing was pivotal. Tesla was transitioning from the niche Roadster and Model S to mass-market production, and the Model X represented Elon Musk's bet that Tesla could build complex, feature-rich vehicles at scale.

Murati's mechanical engineering background made her ideal for the Model X, which faced extraordinary technical challenges. The falcon-wing doors required intricate mechanical systems with sensors to prevent collisions. The larger vehicle platform demanded new battery configurations and thermal management systems. And Musk wanted the Model X to showcase Tesla's most advanced Autopilot features, including lane-keeping, adaptive cruise control, and self-parking.

It was the Autopilot work that changed Murati's trajectory. Tesla's Autopilot team was pioneering computer vision and neural networks for autonomous driving. Murati observed how AI could enable cars to perceive the world, make decisions, and learn from millions of miles of driving data. She attended Tesla's internal AI presentations, studied deep learning research papers, and became fascinated by the potential of artificial intelligence to transform not just cars, but all human-machine interaction.

"My interest in AI started at Tesla," Murati later explained. "We were building cars that could see, understand their environment, and make intelligent decisions. I realized that AI wasn't just about automation—it was about creating machines that could collaborate with humans in intuitive ways. That vision has guided everything I've worked on since."

Murati spent three years at Tesla, seeing the Model X through production challenges, manufacturing delays, and eventual launch in 2015. But by 2016, she was ready for her next challenge: bringing AI to an even more intimate human interface.

Leap Motion (2016-2018): The VR Dream and Its Limits

In 2016, Mira Murati joined Leap Motion (now Ultraleap) as Vice President of Product and Engineering. Leap Motion was developing hand-tracking technology for virtual and augmented reality, using computer vision and machine learning to let users interact with digital environments through natural hand gestures.

The technology was elegant: small sensors tracked hand movements in three dimensions, allowing users to manipulate virtual objects, type on virtual keyboards, and interact with AR interfaces without controllers. Leap Motion had raised over $90 million from investors betting that hand-tracking would become the standard interface for VR/AR, replacing clunky controllers.

Murati led the product and engineering teams developing both the hardware sensors and the AI software that interpreted hand movements. She hoped to make human-computer interaction "as intuitive as playing with a ball"—a natural, fluid interface that required no training or conscious thought.

But Leap Motion faced a fundamental challenge: the VR market wasn't ready. Consumer VR headsets were expensive, uncomfortable, and lacked compelling content. Even with perfect hand-tracking, VR remained a niche technology for gaming enthusiasts and enterprise training applications. Leap Motion's elegant interface couldn't overcome VR's broader adoption barriers.

"We built incredible technology, but the market timing was wrong," Murati reflected in 2024. "VR was five years away from mass adoption—and maybe still is. I learned that even the best technology fails if you can't find product-market fit. That lesson profoundly influenced my thinking about AI products."

By 2018, Murati was restless. She wanted to work on AI systems that could reach hundreds of millions of users, not wait for VR's distant mainstream future. And she wanted to work on the foundational AI research that would enable the next generation of human-computer interfaces. That search led her to OpenAI.

OpenAI (2018-2024)—From VP to CTO, From Research Lab to Product Powerhouse

Joining OpenAI: The 2018 Bet on AGI

When Mira Murati joined OpenAI in 2018 as VP of Applied AI and Partnerships, the organization looked nothing like the $150+ billion behemoth it would become. Founded in December 2015 by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, and others, OpenAI was a nonprofit research lab dedicated to ensuring that artificial general intelligence (AGI) would benefit all humanity.

The organization had 100-120 employees, operated primarily as an academic research lab publishing papers, and had no commercial products. Its most significant work included training agents to play Dota 2 at professional levels and developing GPT (Generative Pre-trained Transformer), a language model that could generate coherent text but had limited practical applications.

Murati's role was to figure out how to turn OpenAI's research into useful applications. "VP of Applied AI and Partnerships" meant building relationships with companies that could deploy OpenAI's models, identifying use cases where AI could create value, and developing the infrastructure to serve models via API.

"When I joined, OpenAI was brilliant at research but had no idea how to build products," Murati said. "Researchers would publish papers showing what models could do, but nobody thought about user experience, reliability, safety, or how to deploy models at scale. My job was to bridge that gap."

Murati brought product discipline honed at Tesla and Leap Motion. She created product teams separate from research teams, established reliability and safety standards, and developed processes for testing models with real users before public release. Her first major project: GitHub Copilot.

GitHub Copilot: The First Product Success

In 2019-2020, Murati's Applied AI team partnered with Microsoft (which had invested $1 billion in OpenAI in 2019) to explore how GPT models could assist software developers. The collaboration led to GitHub Copilot, an AI pair-programmer that suggested code completions and entire functions based on natural language comments.

Copilot launched in June 2021 and immediately demonstrated AI's practical potential. In GitHub's own studies, developers using Copilot completed coding tasks up to 55% faster, with the AI generating roughly 40% of the code in files where it was enabled. Within a year, Copilot had 1.2 million users and was generating over $100 million in annual recurring revenue—OpenAI's first commercial success.

Murati's role was crucial. She insisted on extensive testing with real developers, developed safety filters to prevent Copilot from suggesting vulnerable code, and worked with GitHub's product team to integrate Copilot seamlessly into developers' workflows. It was a masterclass in taking research models and turning them into products people would pay for.

DALL-E: The Multimodal Breakthrough

While Copilot proved AI's commercial potential, Murati was more excited about multimodal AI—systems that combined text, images, and eventually video and audio. In January 2021, OpenAI released DALL-E, a model that generated images from text descriptions. Users could type "an armchair in the shape of an avocado" and DALL-E would create photorealistic images of exactly that.

DALL-E was Murati's vision brought to life. She had championed multimodal research at OpenAI, arguing that the future of AI wasn't just better language models—it was systems that could understand and create across multiple sensory modalities, just as humans do.

"Language is fundamental to human thought, but it's not the only way we think," Murati explained in a 2022 interview. "We think in images, sounds, spatial relationships. True collaborative intelligence requires AI that can work with us across all these modalities."

DALL-E 2, released in April 2022, became a cultural phenomenon. Artists, designers, marketers, and hobbyists created millions of images, pushing the boundaries of what AI-generated art could achieve. By the time OpenAI released DALL-E 3 in September 2023, integrated directly into ChatGPT, the tool had generated over 2 billion images.

Promotion to CTO: The May 2022 Elevation

In May 2022, Sam Altman promoted Mira Murati to Chief Technology Officer, succeeding Greg Brockman, who became President. The promotion recognized Murati's central role in transforming OpenAI from research lab to product company. As CTO, Murati would oversee research, product, and safety—unifying OpenAI's technical organization under a single leader.

The timing was deliberate. OpenAI was preparing to launch ChatGPT, a conversational AI that would become the fastest-growing consumer application in history. Altman needed someone who understood both the research and product sides to lead the launch—someone who could ensure ChatGPT worked reliably while pushing researchers to improve the underlying models. Murati was the obvious choice.

ChatGPT: The November 2022 Launch That Changed Everything

ChatGPT launched on November 30, 2022, as a free research preview. The product was deceptively simple: a text box where users could ask questions and receive detailed, conversational responses. But the impact was seismic. Within five days, ChatGPT had 1 million users. Within two months, 100 million. Within a year, 100 million people were using it every week.

Murati led every aspect of the launch. She worked with research teams to fine-tune GPT-3.5 for conversation, implemented RLHF (reinforcement learning from human feedback) to make responses helpful and harmless, developed safety systems to refuse inappropriate requests, and built the infrastructure to handle millions of concurrent users—a challenge OpenAI had never faced.
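At the heart of the RLHF process mentioned above is a reward model fitted to human preference pairs: annotators pick the better of two responses, and the model is trained to score the chosen response above the rejected one. A minimal sketch of the pairwise (Bradley-Terry) loss at its core, in plain Python:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used when fitting RLHF reward models.

    The loss is -log(sigmoid(margin)), where margin is the gap between the
    reward model's score for the human-preferred response and its score for
    the rejected one. It shrinks toward zero as the model learns to rank
    the preferred response higher.
    """
    margin = reward_chosen - reward_rejected
    # -log(sigmoid(margin)) rewritten as log(1 + e^{-margin}) via log1p
    # for numerical stability with large negative margins.
    return math.log1p(math.exp(-margin))

# Correct ranking (chosen scored higher) yields a small loss;
# an inverted ranking yields a large one.
print(round(preference_loss(2.0, 0.5), 4))   # → 0.2014
print(round(preference_loss(0.5, 2.0), 4))   # → 1.7014
```

In a full RLHF pipeline this loss trains the reward model, whose scores then steer the language model itself via reinforcement learning; the sketch shows only the preference-fitting step.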

"The first week after launch was terrifying," one OpenAI engineer recalled. "We were scaling infrastructure 100x while simultaneously discovering ways users could jailbreak the model or generate harmful content. Mira was in the office until 3am every night, coordinating between research, engineering, and safety teams to keep the system running and safe."

ChatGPT's success validated Murati's product philosophy: AI is most powerful when it's accessible to everyone, not locked behind academic papers or expensive APIs. By making ChatGPT free and easy to use, OpenAI demonstrated AI's potential to hundreds of millions of people who had never interacted with language models before.

The GPT-4 Launch and Enterprise Push

In March 2023, OpenAI launched GPT-4, a dramatically more capable model that could pass the bar exam, write complex code, and analyze images. Murati positioned GPT-4 as OpenAI's enterprise play: businesses would pay for GPT-4's superior reliability and reasoning through API access and ChatGPT Enterprise.

ChatGPT Enterprise, launched in August 2023, offered companies dedicated capacity, admin controls, and data privacy guarantees. Within months it had grown to more than 600,000 enterprise seats, helping push OpenAI's annualized revenue past $2 billion. Murati had transformed OpenAI from a nonprofit research lab into a multi-billion-dollar enterprise software company.

Sora: The Video Generation Moonshot

Throughout 2023-2024, Murati's teams worked on OpenAI's most ambitious multimodal project: Sora, a text-to-video model that could generate photorealistic 60-second videos from text descriptions. Sora represented Murati's ultimate vision: AI that could create across all human sensory modalities.

Sora launched in preview form in February 2024, generating stunning videos that blurred the line between AI-generated and professional film. "A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage"—Sora rendered it with cinematic lighting, accurate physics, and coherent motion across the entire scene.

The technology was extraordinary, but Sora also raised profound questions about AI's societal impact. If AI could generate photorealistic videos, how would society distinguish real footage from AI-generated content? Murati, who chaired OpenAI's safety review processes, insisted on limited release to study Sora's potential harms before public deployment.

The November 2023 Board Crisis—Murati's Complex Role

The Five Days That Nearly Destroyed OpenAI

On Friday, November 17, 2023, OpenAI's board of directors fired Sam Altman as CEO, announcing that "the board no longer has confidence in his ability to continue leading OpenAI." Chief Scientist Ilya Sutskever, who had co-founded OpenAI with Altman, had orchestrated the ouster with a 52-page memo outlining concerns about Altman's leadership and OpenAI's direction.

Mira Murati was immediately named interim CEO. The board's announcement stated she "already leads the company's research, product, and safety functions" and was "uniquely qualified for the role." For the outside world, it appeared the board had executed a smooth transition.

Behind the scenes, the reality was far messier—and Murati's role far more complicated.

The Sutskever Memo and Murati's Information

Ilya Sutskever's decision to support Altman's removal relied heavily on information provided by Mira Murati. In his subsequent deposition after the crisis, Sutskever testified that he "fully believed the information that Mira was giving me" and that he trusted her completely. The 52-page memo Sutskever compiled included screenshots and documentation that Murati had shared, detailing concerns about Altman's leadership style, communication with the board, and prioritization of commercial growth over safety research.

Murati's concerns were substantive. As CTO overseeing research, product, and safety, she had witnessed tensions between Altman's aggressive commercialization push and researchers' concerns about responsible AI development. OpenAI was raising billions from Microsoft, launching enterprise products, and racing to beat Google and Anthropic—all while the research team worried about whether the company was moving too fast.

"Mira genuinely believed OpenAI was losing its way," one person close to the situation explained. "She saw Altman prioritizing revenue and growth over the safety research that was supposed to be OpenAI's core mission. She shared those concerns with Ilya, not realizing he would use them to justify removing Sam."

The Immediate Reversal

But within hours of Altman's firing, Murati realized the ouster was a catastrophic mistake. OpenAI employees were in revolt, threatening mass resignation. Microsoft CEO Satya Nadella was furious; his company had invested $13 billion without being consulted about removing the CEO. And news of the firing was leaking to the media, creating a PR nightmare.

According to Bloomberg, Murati immediately began working to reverse the decision. She pushed for a new board that would absolve Altman of wrongdoing and allow his reinstatement. She discussed the plan with board member Adam D'Angelo, arguing that OpenAI couldn't function without Altman and that the organization's mission was more important than board politics.

Over the weekend, Murati and other OpenAI executives rallied behind a letter demanding Altman's return. By Monday, November 20, over 700 of OpenAI's 770 employees—including Murati—had signed it, threatening to resign and join Microsoft if the board didn't reinstate Altman. Even Ilya Sutskever, who had orchestrated the coup, signed the letter, posting "I deeply regret my participation in the board's actions."

The board capitulated. On Wednesday, November 22, Sam Altman was reinstated as CEO with a reconstituted board. The coup had lasted five days. The organizational trauma would last much longer.

The Lingering Resentment

While Murati kept her CTO role and publicly expressed relief at Altman's return, those close to OpenAI noted that relationships had fundamentally changed. Altman knew Murati had provided information that contributed to his ouster, even if she had quickly worked to reverse it. Trust had been broken.

"There was lingering resentment over the role Mira played," Fortune reported in September 2024. "She had tried to fix the situation once it went sideways, but Sam never fully forgave her for the initial betrayal."

The dynamics were further complicated by Sutskever's departure in May 2024. Sutskever, who had led OpenAI's research for nearly a decade, left to found Safe Superintelligence Inc., taking with him many researchers loyal to OpenAI's original safety-focused mission. Murati remained at OpenAI, but her position had become untenable: trusted by neither Altman's commercial faction nor the departing safety-focused researchers.

The September 2024 Departure—Why Murati Walked Away

The Restructuring and Power Consolidation

By mid-2024, OpenAI was undergoing fundamental transformation. The company was restructuring from a nonprofit-controlled entity to a for-profit corporation, allowing it to raise capital at valuations exceeding $150 billion. Sam Altman was consolidating control, seeking equity in the restructured company (he had previously held no ownership stake) and reshaping OpenAI's governance to prevent future board rebellions.

For Murati, the restructuring symbolized everything she had worried about: OpenAI was abandoning its nonprofit mission to become a traditional tech company optimizing for shareholder value. The safety research that had justified OpenAI's existence was being subordinated to product velocity and revenue growth. And the organizational culture that had attracted her in 2018—researchers united by a mission to ensure AGI benefits humanity—was fragmenting into corporate factions.

"Mira saw OpenAI becoming just another tech company," one former colleague explained. "The mission was still there rhetorically, but the actual priorities were shipping products, beating Anthropic, and justifying Microsoft's investment. For someone who had joined because she believed in the mission, that was devastating."

The Product Pressure and Shipping Velocity

As CTO, Murati faced immense pressure to ship new products and features. OpenAI was launching ChatGPT upgrades every few weeks, rolling out GPT-4 Turbo, integrating DALL-E 3, releasing the o1 reasoning model, and developing countless features to compete with Anthropic's Claude and Google's Gemini.

The pace was exhausting and, in Murati's view, reckless. OpenAI was releasing models without adequate safety testing, discovering problems only after millions of users encountered them, and fixing issues reactively rather than preventing them proactively. This violated Murati's engineering philosophy of building reliable systems before scaling them.

"We were shipping things half-baked and fixing them in production," one OpenAI engineer admitted. "Mira hated it. She came from Tesla and Leap Motion where you don't ship products until they're rock-solid. But Sam wanted to move fast, capture market share, and iterate based on user feedback. It was a fundamental philosophical difference."

The Breaking Point: The September 2024 Decision

In September 2024, Mira Murati decided she had to leave. The immediate trigger was OpenAI's board approving the for-profit restructuring over her objections. But the deeper cause was Murati's realization that she couldn't change OpenAI's direction from inside. The organization had become too large, too commercially focused, and too committed to competing with Anthropic and Google to prioritize the safety-first, mission-driven approach Murati believed was essential.

"I'm stepping away because I want to create the time and space to do my own exploration," Murati wrote in her September 25 resignation memo. The statement was deliberately vague—she couldn't announce her plans without violating non-compete and non-solicitation agreements with OpenAI. But insiders knew exactly what "exploration" meant: building a competitor.

Within hours, Chief Research Officer Bob McGrew and VP of Research Barret Zoph also resigned. The simultaneous departures signaled that Murati wasn't alone in her concerns about OpenAI's direction.

Thinking Machines Lab—Building the OpenAI Alternative

Stealth Mode: Recruiting the Dream Team (October 2024 - February 2025)

From October 2024 through February 2025, Mira Murati operated in stealth mode, quietly building Thinking Machines Lab while bound by non-compete restrictions. She couldn't publicly recruit OpenAI employees or announce her new venture, but she could have private conversations with researchers who were similarly disillusioned with OpenAI's direction.

Her most important recruit was John Schulman, an OpenAI co-founder who had developed RLHF (reinforcement learning from human feedback), the technique that made ChatGPT possible. Schulman had announced his departure from OpenAI in August 2024 to join Anthropic, but Murati convinced him that Thinking Machines offered something neither OpenAI nor Anthropic could: a chance to build collaborative AI systems from scratch, without the organizational baggage and competing priorities of established companies.

Other key recruits included:

Barret Zoph: OpenAI's VP of Research who had co-developed neural architecture search and efficient model training techniques. Zoph would lead Thinking Machines' research organization.

Luke Metz: OpenAI research scientist specializing in meta-learning and optimization algorithms. Metz would focus on making model training more efficient and accessible.

Alec Radford: OpenAI researcher who co-created GPT and CLIP (the model enabling DALL-E). Radford joined as an advisor, providing continuity with OpenAI's technical foundations.

Bob McGrew: Former OpenAI Chief Research Officer who resigned alongside Murati. McGrew would advise on research strategy and organizational structure.

By February 2025, Murati had assembled a team of approximately 30 researchers and engineers—many of them OpenAI veterans who shared her concerns about the company's direction. The team included experts in large language models, multimodal AI, reinforcement learning, and distributed systems.

The February 2025 Public Launch

On February 15, 2025, Thinking Machines Lab publicly announced its existence. The company's mission statement was deliberately distinct from OpenAI's: "We're building collaborative general intelligence—multimodal AI systems that work with humans through natural communication across conversation, sight, and the complex ways humans collaborate."

The positioning was strategic. While OpenAI and Anthropic focused on building increasingly powerful standalone AI systems that could perform tasks autonomously, Thinking Machines would focus on AI that amplified human capabilities through collaboration. The distinction reflected Murati's experience at Leap Motion, where she had learned that the best human-computer interfaces felt natural and intuitive, not like operating a tool.

"We don't want to build AI that replaces humans or operates independently," Murati explained at a February press event. "We want to build AI that makes humans more capable, more creative, and more productive by collaborating seamlessly across all the ways humans naturally communicate."

The announcement generated immediate buzz. Tech journalists noted the irony: Murati, who had built ChatGPT into the fastest-growing consumer application in history, was now positioning her new company as an alternative to OpenAI's approach. Investors began circling immediately.

The $2 Billion Seed Round at $12 Billion Valuation (July 2025)

On July 15, 2025, Thinking Machines Lab announced it had closed a $2 billion seed round at a $12 billion post-money valuation—the largest seed round in venture capital history. The round was led by Andreessen Horowitz, with participation from Nvidia, AMD, Accel, ServiceNow, Cisco, and Jane Street.

The valuation was extraordinary for several reasons:

Company Age: Thinking Machines had launched publicly less than six months earlier and had no commercial product. Most seed-stage startups raise $5-20 million at $20-100 million valuations; Thinking Machines raised roughly 100x a typical seed amount at more than 100x a typical seed valuation.

Female Founder Milestone: The $12 billion valuation made Murati the highest-valued female founder in AI history, surpassing the previous record holder by more than 10x. It was also the largest seed round ever raised by a female founder in any industry.

Talent Premium: The valuation reflected investors' belief that Murati's team—packed with OpenAI veterans who had built ChatGPT, GPT-4, and DALL-E—could replicate that success. Investors weren't betting on technology (which didn't yet exist publicly) but on talent and execution capability.

Strategic Investors: Nvidia and AMD's participation provided compute resources. ServiceNow and Cisco provided enterprise distribution potential. Jane Street brought high-frequency trading expertise relevant to low-latency inference. The investor syndicate was designed to provide strategic value beyond capital.

Marc Andreessen, whose firm led the round, justified the valuation in a blog post: "Mira Murati is the most proven product leader in AI. She built ChatGPT, DALL-E, and Sora—products that defined AI's first commercial era. Now she's building the next generation of collaborative AI with the team that created modern AI. That's worth $12 billion on day one."

What the Money Funded

The $2 billion gave Thinking Machines resources to pursue an ambitious roadmap:

Compute Infrastructure: $800-900 million allocated to GPU purchases and cloud computing contracts. Thinking Machines negotiated favorable terms with Nvidia (an investor) for H100 and next-generation Blackwell GPUs, securing compute capacity that would otherwise be unavailable given industry-wide shortages.

Talent Acquisition: $400-500 million for hiring and compensation. Thinking Machines was targeting 200+ researchers and engineers by end of 2025, paying top-of-market salaries to compete with OpenAI, Anthropic, and Google for AI talent.

Research and Development: $500-600 million for model training, experiments, and infrastructure development. This included building Thinking Machines' core multimodal foundation models and developing the systems to make them collaborative.

Operating Expenses: $200-300 million for offices, legal, corporate infrastructure, and general operations over the next 2-3 years.

The capital gave Thinking Machines runway to pursue long-term research without immediate pressure to generate revenue—ironically, the same strategy OpenAI had abandoned when it restructured for profitability.

Tinker—The October 2025 Product Launch

The API for Distributed Model Fine-Tuning

On October 1, 2025, Thinking Machines Lab launched its first product: Tinker, a Python API for distributed language model fine-tuning. The product immediately clarified Thinking Machines' strategy and validated its $12 billion valuation.

Tinker addressed a fundamental bottleneck in AI development: fine-tuning large language models requires massive computational resources that only a handful of well-funded organizations can afford. Researchers at universities, startups, and smaller companies either use pre-trained models as-is (limiting customization) or spend months cobbling together distributed training infrastructure.

Tinker abstracted away the infrastructure complexity. Researchers could write standard Python code specifying their fine-tuning parameters, and Tinker would automatically distribute the training across hundreds or thousands of GPUs, handle fault tolerance and checkpointing, optimize memory usage, and provide monitoring tools to track training progress.

"Tinker democratizes model fine-tuning," Murati explained at the launch event. "Instead of needing a team of infrastructure engineers and $10 million in GPU resources, researchers can fine-tune state-of-the-art models with a few hundred lines of Python code and a credit card. That's how we accelerate AI progress—by making advanced capabilities accessible to everyone, not just tech giants."
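To make the workflow concrete, here is a minimal sketch of what submitting a managed fine-tuning job "in a few hundred lines of Python" might look like. The class and method names below are hypothetical stand-ins, not Tinker's actual API; the stub only illustrates the shape of the interaction the article describes, where the researcher declares a configuration and the platform handles distribution, checkpointing, and monitoring.

```python
# Illustrative sketch only: FineTuneJob is a hypothetical stand-in,
# not Tinker's real API. A real platform would shard the dataset,
# schedule GPUs, and checkpoint automatically behind this interface.
from dataclasses import dataclass


@dataclass
class FineTuneJob:
    """Declarative config for a managed fine-tuning run."""
    base_model: str
    dataset_path: str
    learning_rate: float = 1e-5
    epochs: int = 3
    status: str = "pending"

    def submit(self) -> str:
        # Stand-in for handing the job to the platform's scheduler.
        self.status = "running"
        return f"job submitted: {self.base_model} on {self.dataset_path}"


# A domain-adaptation run like the medical example in the article:
job = FineTuneJob(
    base_model="llama-3-8b",              # hypothetical model name
    dataset_path="clinical_notes.jsonl",  # hypothetical dataset file
    learning_rate=2e-5,
    epochs=2,
)
print(job.submit())
```

The point of such an interface is that infrastructure concerns never appear in user code: the researcher specifies *what* to train, and the platform decides *where* and *how*.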

Early Traction and Validation

Within weeks of launch, Tinker had attracted users from Berkeley, Princeton, Stanford, and Redwood Research—prestigious institutions validating the product's technical quality. Researchers were using Tinker to:

Fine-tune models on domain-specific data: Medical researchers adapted language models to understand clinical terminology and patient records. Legal scholars fine-tuned models on case law and legal reasoning. Scientists customized models for chemistry, physics, and biology terminology.

Experiment with novel architectures: Researchers tested new attention mechanisms, embedding strategies, and training techniques without managing infrastructure.

Reproduce published results: Academic labs used Tinker to verify claims from AI research papers, contributing to scientific reproducibility.

Prototype new applications: Startups fine-tuned models for specific use cases—customer service, legal document analysis, medical coding—without building training infrastructure in-house.

The early traction validated Thinking Machines' approach: rather than compete directly with OpenAI and Anthropic to build the largest foundation models, Thinking Machines would empower others to customize and deploy AI for their specific needs. It was infrastructure for the long tail of AI applications.

Open Source Component and Community Building

True to Murati's vision of "collaborative intelligence," Tinker included significant open-source components. The core training orchestration code, monitoring tools, and common fine-tuning recipes were released under permissive licenses, allowing researchers to inspect, modify, and contribute improvements.

The open-source strategy served multiple purposes:

Community Building: Open source attracted researchers who would improve Tinker and evangelize it to colleagues, creating a flywheel of adoption and contribution.

Transparency: By open-sourcing core components, Thinking Machines demonstrated its commitment to collaborative AI development—a contrast to OpenAI's increasingly closed approach.

Talent Attraction: Researchers preferred companies that contributed to open source, viewing it as evidence of technical leadership and community engagement.

Ecosystem Lock-in: As researchers built tools and workflows around Tinker's open-source components, they became dependent on Thinking Machines' paid infrastructure for scaling those workflows.

The Business Model

Tinker operated on a simple usage-based pricing model:

Free Tier: Researchers could use Tinker free for small-scale experiments (up to 100 GPU-hours per month), covering most academic research needs.

Pay-as-you-go: Users paid $2-4 per GPU-hour depending on GPU type (A100, H100, etc.), competitive with cloud providers but with superior ease of use.

Research Credits: Thinking Machines provided $100,000-500,000 in credits to select academic labs, generating goodwill and scientific publications that referenced Tinker.

Enterprise Contracts: Companies could negotiate dedicated capacity, support SLAs, and custom features for large-scale production deployments.
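The free-tier-plus-metering structure above reduces to simple arithmetic. The sketch below assumes the figures quoted in the article (100 free GPU-hours per month, $2-4 per GPU-hour); the function name and exact billing rules are illustrative, not Thinking Machines' published pricing logic.

```python
def monthly_cost(gpu_hours: float, rate_per_gpu_hour: float,
                 free_hours: float = 100.0) -> float:
    """Usage-based bill: the first `free_hours` GPU-hours each month
    are free; anything beyond that is billed at the hourly rate."""
    billable = max(0.0, gpu_hours - free_hours)
    return billable * rate_per_gpu_hour


# An academic lab staying under the free tier pays nothing:
print(monthly_cost(80, 3.0))   # 0.0
# A startup running 600 GPU-hours at a hypothetical $3/hour rate:
print(monthly_cost(600, 3.0))  # 1500.0
```

A pricing curve like this explains the funnel: academic users pay nothing and generate publications, while production workloads scale smoothly into paying usage without a plan change.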

By November 2025, Tinker was generating an estimated $5-10 million in monthly revenue: a modest amount, but early evidence of product-market fit and a foundation for scaling toward hundreds of millions in ARR as AI adoption expanded.

The Multimodal Vision—What Comes After Tinker

Beyond Text: Collaborative AI Across All Modalities

While Tinker established Thinking Machines' credibility and revenue foundation, it was just the beginning of Murati's vision. The company's long-term mission—"collaborative general intelligence"—required building multimodal AI systems that could interact with humans across conversation, vision, audio, and eventually touch and embodied robotics.

Murati had hinted at this roadmap in February 2025: "We're building AI that understands how humans naturally collaborate: through conversation, through showing and pointing at things, through sketching ideas, through building prototypes together. That requires multimodal models that can see what you're showing, hear what you're explaining, and respond across all those modalities."

Internal roadmaps obtained by tech journalists outlined Thinking Machines' 2026-2027 product plans:

Q1 2026: Vision-Language Models: Models that could analyze images, diagrams, and videos while maintaining conversational context. Use cases included design feedback, medical image analysis, and visual question-answering.

Q2 2026: Audio Integration: Adding speech recognition and generation, allowing users to interact with AI through conversation rather than typing. Focus on low-latency real-time conversation for collaborative work.

Q3 2026: Collaborative Workspaces: Digital environments where humans and AI could co-create documents, designs, code, and presentations in real-time, similar to Google Docs but with AI as an active collaborator.

2027: Embodied AI: Longer-term research on robots and physical systems that could collaborate with humans in physical spaces, drawing on Murati's mechanical engineering background and experience with Tesla's Autopilot.

The roadmap was ambitious, but Murati's track record suggested she could execute. She had already built ChatGPT, DALL-E, and Sora at OpenAI—products that defined multimodal AI's first generation. Thinking Machines was her chance to build the second generation without OpenAI's organizational constraints.

The Technical Differentiators

What would make Thinking Machines' collaborative AI different from ChatGPT or Claude? Murati outlined three core differentiators:

Real-time Collaboration: Unlike ChatGPT's turn-based conversation, Thinking Machines' AI would interact in real-time, responding immediately to visual cues, interruptions, and multi-party conversations. This required new model architectures optimized for low-latency streaming inference.

Contextual Memory: The AI would maintain long-term memory of collaborations with specific users, learning their preferences, communication styles, and domain knowledge. This personalization would make AI feel like a true collaborator rather than a generic assistant.

Explainability and Transparency: Drawing on Murati's emphasis on safety, Thinking Machines' models would explain their reasoning, cite sources, and express uncertainty—critical for professional use cases where users need to verify AI suggestions.

The Enterprise Go-to-Market Strategy

While Tinker targeted researchers and developers, Thinking Machines' ultimate business model focused on enterprise software. Murati had learned from OpenAI that consumer AI was difficult to monetize at scale (a ChatGPT Plus subscription generated only $240 per user per year), but enterprise AI could generate $50-100 per seat per month, 2.5 to 5x the revenue per user.
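The consumer-versus-enterprise argument rests on straightforward per-seat arithmetic, using the subscription figures quoted above:

```python
# Annual revenue per user, from the figures in the article.
consumer_annual = 20 * 12    # ChatGPT Plus at $20/month
enterprise_low = 50 * 12     # enterprise seat at $50/month
enterprise_high = 100 * 12   # enterprise seat at $100/month

print(consumer_annual)                    # 240
print(enterprise_low / consumer_annual)   # 2.5
print(enterprise_high / consumer_annual)  # 5.0
```

Even at the low end, an enterprise seat is worth several consumer subscriptions, before accounting for lower churn and support costs spread across whole organizations.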

The enterprise strategy built on Tinker's foundation:

Phase 1 (2025): Developer Tool: Tinker establishes brand recognition and technical credibility with researchers and developers who will eventually work at enterprises.

Phase 2 (2026): Collaborative AI Platform: Launch multimodal collaborative tools for knowledge workers—analysts, designers, engineers, consultants—who need AI assistance for creative and analytical work.

Phase 3 (2027-2028): Enterprise Platform: Build administrative controls, security features, and integration capabilities to sell Collaborative AI as an enterprise platform, similar to how Slack, Notion, and Figma scaled from individual users to enterprise contracts.

The model mirrored how OpenAI scaled ChatGPT to ChatGPT Enterprise, but with a critical difference: Thinking Machines would design for enterprise needs from the beginning, rather than retrofitting consumer products for business use.

The Competitive Landscape—Thinking Machines vs. OpenAI vs. Anthropic

OpenAI: The Incumbent Giant

OpenAI remained the dominant force in AI, with ChatGPT's 200 million users, $4+ billion in ARR, and a $150+ billion valuation backed by Microsoft's $13 billion investment. GPT-4 and the newer o1 reasoning model set capability benchmarks that competitors struggled to match.

But OpenAI also faced significant challenges that Thinking Machines could exploit:

Product Fragmentation: OpenAI now offered ChatGPT, ChatGPT Plus, ChatGPT Team, ChatGPT Enterprise, API access, and custom models through fine-tuning. The fragmented product line confused customers and created pricing tensions.

Leadership Instability: The November 2023 board crisis, followed by executive departures (Murati, McGrew, Zoph, Schulman), raised questions about OpenAI's stability and culture.

Mission Drift: OpenAI's transformation from nonprofit to for-profit corporation alienated researchers and safety advocates who had joined for the original mission.

Microsoft Dependency: OpenAI's reliance on Microsoft for cloud infrastructure, enterprise distribution, and capital created strategic vulnerability. If Microsoft priorities changed, OpenAI's growth could stall.

Thinking Machines could position itself as the OpenAI that OpenAI could have been: focused on collaborative AI that serves human interests rather than racing toward autonomous AGI for commercial gain.

Anthropic: The Safety-First Alternative

Anthropic, founded by former OpenAI researchers Dario and Daniela Amodei, had positioned itself as the "responsible AI" alternative, emphasizing Constitutional AI and safety research. The company raised $13 billion at a $183 billion valuation in September 2025, demonstrating that safety-focused positioning resonated with investors and enterprises.

Anthropic's Claude models competed directly with GPT-4 on capabilities while emphasizing transparency and refusal to generate harmful content. The company had secured major enterprise partnerships with Deloitte, Cognizant, and government agencies.

Thinking Machines differentiated from Anthropic through:

Collaboration vs. Safety: While Anthropic focused on making AI safe and responsible, Thinking Machines focused on making AI collaborative and useful. Both emphasized ethical development, but from different philosophical starting points.

Multimodal Integration: Thinking Machines' vision of AI working across conversation, vision, audio, and embodiment went beyond Anthropic's text-and-image focus.

Infrastructure Tools: Tinker provided infrastructure that both OpenAI and Anthropic lacked—a platform for others to build AI applications, not just consume pre-built models.

The Long-Term Positioning

By late 2025, the AI industry landscape had crystallized into three distinct philosophies:

OpenAI: Build AGI that can perform any intellectual task, optimize for capability and speed, monetize through consumer subscriptions and enterprise API access.

Anthropic: Build safe, steerable AI systems that follow Constitutional principles, focus on enterprise customers in regulated industries, emphasize transparency and alignment research.

Thinking Machines: Build collaborative AI that amplifies human capabilities across all modalities, provide infrastructure for others to customize AI, prioritize human-AI interaction over autonomous operation.

Each philosophy attracted different customer segments. OpenAI dominated consumer mindshare. Anthropic won enterprises prioritizing safety. Thinking Machines targeted users who wanted AI as a collaborator, not a replacement—knowledge workers, researchers, and creative professionals who valued augmentation over automation.

Conclusion: The Engineer Who Defined AI's First Era—And May Define Its Second

In July 2024, when Mira Murati was still OpenAI's CTO, she gave an interview reflecting on her career journey. "I've always been drawn to technology that changes how humans interact with the world," she said. "At Tesla, it was cars that could see and drive themselves. At Leap Motion, it was interfaces that felt like magic. At OpenAI, it was AI that could understand and create across all human modalities. The through-line has always been: how do we make technology feel natural and collaborative, not foreign and intimidating?"

That philosophy now defines Thinking Machines Lab. While OpenAI races toward autonomous AGI and Anthropic emphasizes AI safety, Murati is building something different: AI that works with humans rather than replacing or controlling them. It's a vision shaped by her mechanical engineering training, her experience building Tesla's self-driving cars and VR interfaces, and her six years transforming OpenAI from research lab to product powerhouse.

The $12 billion valuation and $2 billion in funding give Murati resources few founders ever access. The team of OpenAI veterans—including John Schulman, Barret Zoph, and Luke Metz—provides talent that money can't usually buy. And the October 2025 launch of Tinker demonstrates that Thinking Machines can ship products, not just raise capital and make promises.

But the ultimate test lies ahead. Can Thinking Machines compete with OpenAI's ChatGPT juggernaut and Anthropic's $13 billion war chest? Can collaborative AI generate the same excitement as autonomous AGI? Can Murati's vision of multimodal, explainable, human-centered AI resonate with enterprises and consumers accustomed to ChatGPT's impressive but sometimes opaque capabilities?

History suggests betting against Mira Murati is unwise. She joined Tesla when it was scaling the Model X through production hell. She joined Leap Motion when VR hand-tracking seemed impossibly futuristic. She joined OpenAI when it was an obscure research lab. In each case, she took ambitious visions and turned them into products millions of people used.

At OpenAI, she built ChatGPT—the fastest-growing consumer application in history. She created DALL-E, which made AI-generated art mainstream. She launched Sora, which demonstrated AI could generate photorealistic video. She survived a boardroom coup and helped save the company from self-destruction. And she walked away from what could have been a $1.5 billion equity stake in OpenAI's restructuring because she believed she could build something more important.

Now, at 36 years old, Murati is making the biggest bet of her career: that the future of AI isn't about building machines that think like humans, but about building machines that collaborate with humans in ways that feel natural, transparent, and empowering.

If she succeeds, Thinking Machines will define AI's second era the way ChatGPT defined its first. And Mira Murati—the mechanical engineer from Vlorë, Albania who won a scholarship at 16 and never stopped building the future—will have proven that the most important technology isn't the most powerful, but the most collaborative.

The next three years will determine whether that vision can compete with OpenAI and Anthropic's approaches. But anyone who has watched Mira Murati build Tesla's self-driving technology, create ChatGPT, and walk away from OpenAI at the height of her influence knows better than to bet against her.