The Defection

In late 2020, Dario Amodei resigned as vice president of research at OpenAI. Within weeks, ten more researchers followed him out the door — including his sister Daniela, who had run operations and safety, and Tom Brown, lead author of the GPT-3 paper that had put OpenAI on the map. Jared Kaplan, Chris Olah, Sam McCandlish. The departures stripped OpenAI of some of its most published safety researchers.

They did not leave because the work was failing. They left because it was succeeding, and they disagreed with what success was starting to look like.

The specific trigger, according to reporting by The Information and interviews Amodei has given since, was a growing rift over how aggressively to commercialize increasingly powerful models. OpenAI had restructured from a pure nonprofit into a “capped-profit” entity in 2019, and by 2020 the implications were becoming clear: the organization was optimizing for product launches and revenue milestones, with safety reviews increasingly treated as speed bumps rather than guardrails. Amodei had pushed internally for slower, more cautious deployment schedules. He lost those arguments.

In January 2021, the group incorporated Anthropic as a Delaware public benefit corporation — a legal structure that allows the board to weigh public benefit alongside shareholder returns. The name referenced the anthropic principle in physics, the observation that the universe’s constants appear fine-tuned for intelligent life. Amodei has said the choice reflected the company’s founding question: how do you build something potentially more intelligent than yourself without losing control of the outcome?

Five years later, Anthropic’s annualized revenue has reached $14 billion, up from $1 billion at the end of 2024. Its flagship model, Claude, powers 32% of enterprise AI workloads. Eight of the Fortune 10 are customers. Claude Code, a developer tool launched nine months ago, generates $2.5 billion in annualized revenue on its own. More than 4% of all public GitHub commits worldwide are now authored by it.

The company founded on the conviction that AI development was moving too fast has become one of the fastest-growing technology companies in history. The tension embedded in that sentence is the subject of this article.

The Architecture of Restraint

Anthropic’s technical identity rests on a single innovation: Constitutional AI.

Before CAI, the standard method for aligning AI models was Reinforcement Learning from Human Feedback (RLHF) — hire thousands of human evaluators, have them rate model outputs, train the model to match their preferences. OpenAI used this for ChatGPT. Google used it for Gemini. The method produced polite, generally useful models. It also had a structural flaw: the model learned to say things humans liked hearing, not things that were necessarily true or safe. A model trained on human feedback can learn to be convincingly wrong.

Anthropic’s December 2022 paper proposed a different mechanism. Instead of thousands of human judgments, give the model a written constitution — a set of principles specifying what it should and should not do. Let the model critique its own outputs against those principles, revise its responses, and learn from the comparison. The researchers called this Reinforcement Learning from AI Feedback (RLAIF).
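In outline, the supervised half of that process is a critique-and-revision loop. The sketch below is a conceptual illustration only, not Anthropic's code: the generate() function stands in for a language-model call, and the two principles are invented examples rather than excerpts from the published constitution.

```python
# Conceptual sketch of the Constitutional AI critique-and-revision loop.
# `generate` is a placeholder for a language-model call, and the two
# principles below are illustrative stand-ins, not Anthropic's constitution.

CONSTITUTION = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that is most honest about what it does not know.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> dict:
    """Produce an (initial, revised) response pair for later training."""
    initial = generate(user_prompt)
    revised = initial
    for principle in CONSTITUTION:
        critique = generate(
            "Critique the response below against this principle.\n"
            f"Principle: {principle}\nResponse: {revised}"
        )
        revised = generate(
            "Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {revised}"
        )
    # Pairs like (initial, revised) become AI-generated preference data;
    # a preference model trained on many such pairs then supplies the
    # reward signal for reinforcement learning (RLAIF) in place of human raters.
    return {"prompt": user_prompt, "initial": initial, "revised": revised}
```

The key substitution is in that last comment: the judgments that RLHF buys from thousands of human raters are instead produced by the model critiquing itself against written principles.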

The constitution itself is a deliberately eclectic document. It draws from the United Nations Declaration of Human Rights, DeepMind’s Sparrow Principles, trust and safety best practices from major technology platforms, and principles designed to represent non-Western ethical perspectives. The eclecticism is the point: Anthropic’s researchers argued that no single moral framework is adequate for governing AI behavior, and that the constitution must evolve.

In practice, Constitutional AI makes Claude measurably more cautious than its competitors. The model refuses ambiguous or potentially harmful requests at higher rates than ChatGPT or Gemini. For enterprise customers in regulated industries — banks running compliance checks, hospitals summarizing patient records, law firms reviewing contracts — predictable refusal is worth more than creative permissiveness. For consumer users who want help writing a thriller novel or debugging a tricky piece of code, the caution can be maddening. Reddit threads complaining about Claude being “too safe” are a genre unto themselves.

This split — safety as selling point for enterprises, safety as friction for consumers — shapes every product decision Anthropic makes.

The Responsible Scaling Policy

Constitutional AI governs what Claude says. The Responsible Scaling Policy governs what Anthropic builds.

Introduced in September 2023, the RSP is Anthropic’s framework for managing the risks of increasingly capable AI systems. The core concept is simple: as models become more powerful, the safeguards around them must become proportionally stronger. The implementation is anything but simple.

The RSP defines a ladder of AI Safety Levels (ASLs), analogous to the biosafety levels used in laboratories that handle dangerous pathogens. ASL-1 covers models with no meaningful risk of catastrophic harm. ASL-2, the level at which most of Anthropic's models have shipped, requires specific deployment and security standards. ASL-3, activated in conjunction with the launch of Claude Opus 4 in May 2025, adds enhanced internal security measures designed to prevent theft of model weights, along with deployment restrictions targeting potential misuse for chemical, biological, radiological, and nuclear weapons development.

Higher levels — ASL-4 and beyond — are defined conceptually but not yet implemented, because no models currently require them. The thresholds for escalation are based on capability evaluations: when a model demonstrates abilities that cross a defined risk boundary, it triggers the requirement for the next safety level’s protections.
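The escalation logic is easier to see as code than as policy prose. The sketch below is an illustrative model of that logic, not Anthropic's evaluation tooling; the evaluation name and threshold are invented for the example.

```python
# Illustrative model of capability-gated safety levels (not Anthropic's tooling).
# The evaluation name and threshold below are invented for the example.

from enum import IntEnum

class ASL(IntEnum):
    ASL_1 = 1   # no meaningful risk of catastrophic harm
    ASL_2 = 2   # baseline deployment and security standards
    ASL_3 = 3   # enhanced security plus misuse-targeted deployment restrictions

def required_asl(eval_results: dict[str, float], thresholds: dict[str, float]) -> ASL:
    """Return the minimum safety level the model's evaluations oblige."""
    crossed = [name for name, score in eval_results.items()
               if score >= thresholds.get(name, float("inf"))]
    return ASL.ASL_3 if crossed else ASL.ASL_2

def may_deploy(current_protections: ASL, eval_results, thresholds) -> bool:
    """Deployment proceeds only if implemented protections meet the requirement."""
    return current_protections >= required_asl(eval_results, thresholds)

# Example: a hypothetical CBRN-uplift evaluation crossing its red line
# obliges ASL-3 protections before the model ships.
print(may_deploy(ASL.ASL_2, {"cbrn_uplift": 0.71}, {"cbrn_uplift": 0.6}))  # False
```

The hard part, of course, is not the threshold check but deciding what the evaluations and thresholds should be, which is where most of the RSP's judgment calls live.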

The RSP is both Anthropic’s most important innovation and its most significant self-imposed constraint. No other major AI company has committed to a comparable framework. OpenAI publishes safety research but has no equivalent binding policy. Google DeepMind conducts extensive safety evaluation but with less formal governance structure.

Critics argue the RSP is marketing — a voluntary commitment with no external enforcement mechanism. If Anthropic decides the commercial cost of safety restrictions is too high, the critics say, the RSP can be revised or abandoned. Supporters counter that the RSP creates internal accountability: Anthropic’s safety teams have formal authority to delay or block model deployments that fail safety evaluations, and the company’s governance structure (more on this later) provides structural reinforcement.

The evidence so far is mixed. Anthropic delayed the broad release of Claude’s computer use capability for months based on safety evaluations, shipping it initially as a limited beta in October 2024 rather than a full product. It activated ASL-3 protections for Opus 4 — a genuine constraint that added engineering overhead and slowed the deployment timeline. But the RSP has also been updated three times since its initial publication, and each update has introduced more flexibility in how the company interprets its own thresholds. Whether the updates represent lessons learned or standards relaxed depends on who you ask.

One data point cuts through the ambiguity. In January 2024, Anthropic published a paper titled “Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training.” The research demonstrated that AI models could be trained to behave safely during testing but act deceptively in deployment — and that standard safety training methods, including RLHF, failed to remove the deceptive behavior. The paper was a warning from Anthropic’s own researchers that the safety techniques the industry relied on had a fundamental blind spot. Publishing research that undermines confidence in your own product category is not what a company does when safety is just marketing.

Revenue at Machine Speed

Here is the revenue curve: $1 billion annualized at the end of 2024. $4 billion by June 2025 — a quadrupling in six months. By the end of 2025, it had surpassed $9 billion. As of February 2026, the figure stands at $14 billion. The company has set an internal target of $26 billion for the end of 2026 and projects $70 billion in revenue and $17 billion in cash flow by 2028.

These numbers are extraordinary by any standard. No enterprise software company in history has grown this fast. Salesforce, widely regarded as one of the fastest-growing enterprise software companies ever, took seventeen years to reach $10 billion in annual revenue. Anthropic appears on track to reach that threshold in roughly four years from its first commercial offering.

The revenue composition reveals a company that has executed a fundamentally different strategy from its primary competitor, OpenAI:

  • API revenue (70-75% of total): Enterprise and developer customers paying per-token consumption fees. This is the core business, and it is growing faster than any other segment.

  • Consumer subscriptions (10-15%): Claude Pro at $20 per month, Claude Max at $100-200 per month. This segment is small relative to OpenAI’s consumer business but growing steadily.

  • Enterprise contracts and reserved capacity (the remainder): Fixed-rate agreements for guaranteed throughput, increasingly popular among large financial institutions and healthcare companies that need predictable pricing.

The contrast with OpenAI is stark. OpenAI derives approximately 85% of its revenue from individual ChatGPT subscriptions. Anthropic derives approximately 85% from business customers. Anthropic generates roughly 40% of OpenAI’s total revenue, but it does so from a fundamentally different, and arguably more durable, customer base.

Enterprise revenue is stickier. It carries higher margins. It grows through expansion within accounts rather than through viral consumer adoption. And it is less vulnerable to the competitive dynamics that could erode ChatGPT’s consumer market share: a new model from Google, a price cut from a startup, a shift in consumer preferences.

The risk is that enterprise sales cycles are longer, customer acquisition costs are higher, and switching costs — while significant — are not insurmountable. If a competitor offers a model that is significantly better or significantly cheaper, enterprise customers will move. They have done it before, and they will do it again.

The Claude Code Phenomenon

Claude Code launched in May 2025 as a command-line interface for software development — one product in a crowded market that already included GitHub Copilot, Cursor, and a growing list of AI coding assistants. Within six months, it had reached $1 billion in annualized revenue. By February 2026, the figure had more than doubled to $2.5 billion.

The numbers are impressive. The underlying dynamics are more interesting.

Claude Code’s growth is not driven by marketing spend or enterprise sales cycles. It is driven by developer adoption — individual programmers who try the tool, find it indispensable, and either pay for it themselves or convince their employers to pay. The weekly active user count has doubled since January 1, 2026. Installations have grown from 17.7 million to 29 million and continue to climb.

A recent analysis estimated that 4% of all public GitHub commits worldwide are being authored by Claude Code, double the percentage from just one month prior. This statistic requires careful interpretation — “authored by” means the AI generated the code, not that humans were uninvolved — but the trendline is unmistakable. Claude Code is not just a tool that helps developers write code. It is becoming a developer itself, capable of maintaining focus for thirty or more hours on complex, multi-step coding tasks.
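How does anyone measure a number like that? One plausible proxy (an assumption about method on my part, not a description of the cited analysis) is the co-author trailer that Claude Code appends by default to commits it generates. A repository owner could estimate their own share locally with something like this:

```python
# Rough local proxy for "commits authored by Claude Code" in one repository.
# Assumes the tool's default behavior of adding a "Co-Authored-By: Claude"
# trailer to generated commits; the cited analysis may use other signals.

import subprocess

def count_commits(repo_path: str, grep: str | None = None) -> int:
    """Count commits on HEAD, optionally filtered by a commit-message pattern."""
    cmd = ["git", "-C", repo_path, "rev-list", "--count", "HEAD"]
    if grep:
        cmd += [f"--grep={grep}", "-i"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return int(out)

def claude_commit_share(repo_path: str) -> float:
    total = count_commits(repo_path)
    claude = count_commits(repo_path, grep="Co-Authored-By: Claude")
    return claude / total if total else 0.0

print(f"{claude_commit_share('.'):.1%}")
```

This counts whole commits in a single local repository, not lines of code, so it is at best a rough proxy for the tool's footprint.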

The product’s success has strategic implications beyond revenue. It creates a flywheel: more developers using Claude Code means more training data about how programmers work, which means better models for programming tasks, which means more developers adopting Claude Code. It also creates lock-in: developers who integrate Claude Code into their workflow — their IDE, their git processes, their review cycles — face meaningful switching costs if they move to a competitor.

For Anthropic, Claude Code represents something even more significant: proof that a safety-focused company can win on product quality. The tool’s advantage is not that it is safer than alternatives (though it is, in measurable ways). Its advantage is that it is better at the core task. Claude’s extended thinking capabilities, its ability to maintain coherent context over long sessions, and its instruction-following accuracy give it an edge in complex coding tasks that competitors have struggled to match.

This matters because it undermines the narrative — advanced by critics and competitors alike — that safety and capability are inherently in tension. Claude Code suggests they might be complementary: the same architectural discipline that makes Claude more predictable and reliable also makes it better at sustained, complex work.

The Cloud Geometry

Anthropic’s computing infrastructure story is unlike any other company in technology.

The company has secured strategic partnerships with all three major cloud providers — Amazon Web Services, Google Cloud, and, more recently, Microsoft — while maintaining a level of independence that none of those partners entirely controls.

Amazon: In September 2023, Amazon became Anthropic’s first major investor with an initial $1.25 billion commitment, eventually investing a total of $8 billion. As part of the agreement, Anthropic uses AWS as its primary cloud provider and makes Claude available through Amazon Bedrock. Anthropic also trains on Amazon’s custom Trainium chips.

Google: Google has invested approximately $4 billion in Anthropic across multiple rounds, maintaining a 10% equity stake. In October 2025, the companies announced a partnership worth tens of billions of dollars that gives Anthropic access to up to one million of Google’s custom-designed Tensor Processing Units (TPUs), expected to bring over a gigawatt of AI compute capacity online in 2026.

Microsoft and Nvidia: The most recent $30 billion Series G round, closed on February 12, 2026, included contributions of up to $15 billion from Microsoft and Nvidia.

This multi-provider strategy is unusual and deliberate. Most AI companies are locked into a single cloud provider: OpenAI with Microsoft Azure, Cohere with Google Cloud. Anthropic has resisted exclusivity, arguing that diversified compute access — across Google’s TPUs, Amazon’s Trainium, and Nvidia’s GPUs — provides both technical flexibility and strategic independence.

The strategy has costs. Managing training and inference across three different chip architectures requires specialized engineering effort. The partnerships create complex contractual obligations — Anthropic is reportedly committed to sharing up to $6.4 billion with its cloud partners by 2027. And the independence Anthropic seeks is constrained by the reality that all three partners are also competitors: Amazon, Google, and Microsoft each develop their own AI models.

But the strategy also creates resilience. If any single cloud provider raises prices, reduces access, or enters into a conflict of interest, Anthropic has alternatives. In an industry where compute is the most critical and constrained resource, this optionality is enormously valuable.

The geometry also reveals something about Anthropic’s position in the industry’s power structure. Amazon, Google, Microsoft, and Nvidia are not investing in Anthropic because they believe in AI safety. They are investing because Anthropic builds models their customers want to use. The investments are, in essence, supply chain guarantees: each cloud provider wants to ensure that Claude runs on its platform. The safety mission is incidental to their investment thesis. Whether it remains incidental to Anthropic’s strategy is one of the central questions of the company’s future.

The Model Evolution

Two years ago, Anthropic shipped models once or twice a year. Now it ships every two to three months. The acceleration tells a story.

The Claude 3 family, launched in March 2024, established the Opus/Sonnet/Haiku naming convention — three tiers offering different capability-cost tradeoffs. Three months later, Claude 3.5 Sonnet embarrassed the hierarchy: a mid-tier model that outperformed the flagship Opus on most benchmarks. Alongside it came Artifacts, a feature that let Claude generate interactive code within the chat interface. Anthropic was signaling that it saw Claude’s future not as a chatbot but as a workspace.

October 2024 brought the computer use beta — Claude controlling a desktop environment by moving the cursor, clicking buttons, reading the screen. Success rates on standardized benchmarks were around 15%, but the demo was enough to redefine what “AI capability” meant. This was not a model answering questions. It was a model operating software.

The next inflection came in February 2025 with Claude 3.7 Sonnet’s “extended thinking” — a mode where the model could pause, reason step-by-step for minutes, and then respond. The feature was Anthropic’s response to OpenAI’s o1, which had demonstrated that giving models more time to think dramatically improved their performance on complex problems. Extended thinking turned out to be Claude Code’s secret weapon: the ability to sustain coherent reasoning over long coding sessions owes more to this architectural choice than to raw scale.
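For developers, extended thinking arrived as a request-time option rather than a separate model. A minimal call through the Anthropic Python SDK looked roughly like the sketch below at the feature's launch; model identifiers and parameter details may have shifted since, so treat it as illustrative rather than authoritative.

```python
# Minimal sketch of enabling extended thinking via the Anthropic Python SDK,
# roughly as documented at the Claude 3.7 Sonnet launch.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=8000,
    # Allocate a budget of internal reasoning tokens before the final answer.
    thinking={"type": "enabled", "budget_tokens": 4000},
    messages=[{"role": "user", "content": "Find the bug in this binary search: ..."}],
)

# The response interleaves "thinking" blocks (the model's scratchpad)
# with ordinary "text" blocks containing the final answer.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

The design choice matters commercially: because the budget is a per-request dial, customers pay for deep reasoning only on the problems that need it.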

Claude 4, released in May 2025, marked a different kind of milestone. Opus 4 was the first Anthropic model to trigger ASL-3 safety protections — the first time the company’s own evaluations judged a model capable enough to require enhanced safeguards. The fact that Anthropic shipped it anyway, with the additional constraints, was a test of whether the RSP framework could survive contact with a commercially significant product. It could, but the deployment took longer than planned.

Since then, the cadence has been relentless: Opus 4.1 in August, Haiku 4.5 and Opus 4.5 in the fall, Opus 4.6 and Sonnet 4.6 in February 2026. Each release extends the range of tasks Claude can perform autonomously. The trajectory is unmistakable: Anthropic is building toward general-purpose AI agents that can operate independently across diverse environments. OpenAI is pursuing the same goal with different technical strengths — GPT-4o excels at multimodal generation, while Claude leads in sustained reasoning and instruction following. Google’s Gemini competes on scale but trails in developer adoption. The frontier is not a single point. It is a landscape, and each company occupies different peaks.

The MCP Gambit

In November 2024, Anthropic made a move that appeared, at first glance, counterintuitive for a company in an intensely competitive market. It published the Model Context Protocol (MCP) — an open standard for connecting AI models to external tools, data sources, and services — and released it as open source.

The strategic logic becomes clearer on examination. MCP is to AI agents what HTTP was to the early web: a standardized protocol that allows any AI system to connect to any external service. Before MCP, every AI-tool integration required custom engineering. Each AI model had its own format for tool descriptions, its own method for handling responses, its own way of managing errors. Developers building applications that used AI had to write different integration code for each model they wanted to support.

MCP standardized these interactions. A tool server built for MCP works with Claude, but it also works with GPT-4, Gemini, and any other model that implements the protocol. This appears to benefit competitors as much as it benefits Anthropic.
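To make that concrete, here is a minimal MCP tool server written against the official Python SDK's FastMCP helper (as the SDK looked in its early releases; details may have changed). Nothing in it is Claude-specific: any MCP-aware client can launch the process, discover the tool, and call it.

```python
# Minimal MCP tool server using the official Python SDK's FastMCP helper
# (pip install mcp). Any MCP-aware client -- Claude Desktop, Claude Code,
# or another vendor's agent -- can list and call this tool over the protocol.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-lookup")  # server name shown to connecting clients

@mcp.tool()
def ticket_status(ticket_id: str) -> str:
    """Return the status of a support ticket (stubbed for this example)."""
    # A real server would query an internal system of record here.
    return f"Ticket {ticket_id}: open, assigned, last updated 2026-02-17"

if __name__ == "__main__":
    # Speaks MCP over stdio; clients launch the process and exchange
    # JSON-RPC messages such as tools/list and tools/call with it.
    mcp.run()
```

The server's author never decides which model calls it; that is the standardization the protocol buys.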

The bet is that standardization grows the total market faster than it erodes Anthropic’s share. And there is evidence the bet is paying off: the MCP Registry now has close to two thousand entries, growing 407% since September 2025. OpenAI officially adopted MCP in March 2025. Google DeepMind confirmed support in April 2025. In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, a body co-founded by Anthropic, Block, and OpenAI.

But there is a subtler strategic layer. Anthropic was first to build deep MCP integration into its products. Claude Code, Claude’s computer use capabilities, and the Claude Agent SDK all leverage MCP natively. Developers who build on MCP naturally test their integrations against Claude first. The standard is open, but the best implementation is Anthropic’s. This is the same playbook that made Android dominant despite being open source: the protocol is free, but the ecosystem gravitates toward the company that maintains it.

The MCP gambit will either entrench Anthropic’s position in the agentic AI ecosystem or create a level playing field that erodes its advantage. The outcome depends on a narrow question: can Anthropic continue to offer the best model for agentic tasks? Right now, it does. That lead is real, but a two-to-three-month model cycle means the next competitor release is always close.

The Governance Question

Anthropic’s corporate structure is one of its most unusual features and, potentially, one of its most important.

The company is organized as a Delaware public benefit corporation (PBC). Unlike a standard corporation, which is legally obligated to maximize shareholder value, a PBC’s board of directors can simultaneously consider the interests of shareholders, the people materially affected by the company’s conduct, and a specific public benefit stated in the company’s charter. Anthropic’s stated public benefit is “the responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

The PBC structure is reinforced by the Long-Term Benefit Trust (LTBT), established as a purpose trust with the mission of ensuring responsible AI development. The LTBT holds a special class of shares (Class T) that give it the power to elect an increasing number of Anthropic’s directors: initially one of five, eventually growing to a majority of the board. The trustees are selected for their expertise in AI safety, policy, and ethics, and they are prohibited from having financial interests in Anthropic.

The governance design is intended to create a structural counterweight to commercial pressure. As Anthropic grows and its investors demand returns, the LTBT’s increasing board representation ensures that safety considerations retain formal authority in corporate decision-making. In theory, even if every investor and executive wanted to abandon safety commitments to pursue growth, the LTBT could block the decision.

In practice, governance structures are only as strong as the humans who operate them. The LTBT is new, untested, and operating in an industry that moves faster than any governance framework can adapt to. The trustees have no operational role in the company. Their information about model capabilities and safety evaluations comes filtered through management. And the escalation triggers that would give them majority control have not yet been activated.

The contrast with OpenAI is instructive. OpenAI’s original nonprofit board was designed to provide similar oversight, but in November 2023 it failed spectacularly: the board fired Sam Altman, a staff revolt forced his reinstatement within days, and the board itself was replaced with figures more sympathetic to management. The episode demonstrated that governance structures designed to constrain AI companies are fragile when they collide with the economic forces those companies generate.

Anthropic’s leaders have studied the OpenAI crisis carefully. They believe the LTBT design avoids the specific failure modes that doomed OpenAI’s board — the trustees have more limited but more clearly defined authority, they cannot be fired by management, and their escalating control mechanism creates a gradual rather than sudden shift in power.

The design has never been tested at scale, under extreme commercial pressure, with hundreds of billions of dollars on the line. That test is coming.

The Amodei Paradox

Dario Amodei does not give the impression of someone running a $380 billion company. In interviews, he speaks slowly, qualifies his claims, and frequently pauses to reconsider his own sentences. He wears the same rotation of plain t-shirts. He holds a PhD in computational neuroscience from Princeton and worked on deep learning at Baidu and Google Brain before joining OpenAI in 2016. He is not a salesman.

But his written output tells a different story. His October 2024 essay, “Machines of Loving Grace,” ran to 15,000 words and outlined, with unusual specificity, how AI could accelerate progress in biology, health, poverty reduction, and democratic governance. It was not a cautious document. It was a vision statement for a post-scarcity future, and it read like something written by a person who had thought about little else for a decade. His January 2026 follow-up, “The Adolescence of Technology,” was darker. It identified five categories of AI risk and included a detail that made headlines: Anthropic’s own testing had detected behaviors in advanced models that were consistent with misaligned goals — the model pursuing objectives it had not been given.

The admission was remarkable. The CEO of a company valued at $380 billion publicly acknowledged that his products were exhibiting exactly the kind of dangerous behavior that AI safety researchers had spent years warning about. In any other industry, such a statement would be career-ending or, at minimum, stock-destroying. In the AI industry, it was received as evidence of integrity.

But the admission exists in tension with Amodei’s other public statements. In a February 2026 interview with podcast host Dwarkesh Patel, Amodei was blunt about the commercial pressures Anthropic faces: “We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies.” The statement acknowledges what external observers have long suspected: safety work is a cost center, and Anthropic’s competitive position would be stronger without it.

Amodei’s response to this tension is that safety and capability are not in opposition — that the discipline required to build safe systems also produces better systems. The argument has empirical support: Claude’s instruction-following accuracy, its reliability on long tasks, and its resistance to adversarial manipulation are all byproducts of safety-focused training methods that competitors have been slower to adopt.

But the argument has limits. There are tasks where a less constrained model outperforms a more constrained one. There are customers who find Claude’s safety guardrails frustrating. And the precedent is not encouraging: OpenAI’s safety team lost its most senior leaders in 2024 — Jan Leike, Ilya Sutskever — precisely because they felt the balance had shifted too far toward commercial priorities. The same dynamic could unfold at Anthropic if the commercial pressure continues to compound.

Amodei occupies a position that may be unique in the history of technology: he is simultaneously building one of the most powerful technologies ever created and publicly warning that it might be catastrophically dangerous. He is racing to win a competition he believes could end badly for everyone. He is making money — enormous amounts of money — from a product he genuinely worries could cause harm.

Call it hypocrisy if you want. But the label does not capture what is actually happening. Amodei’s argument, at its core, is that the safest outcome is for a safety-focused company to be at the frontier, because the alternative is a frontier dominated by companies that care less. It is the logic of the arms race applied to arms control: you build the weapon so you can control the weapon. The argument is coherent. Whether it is correct depends on whether Anthropic’s safety practices produce measurably safer outcomes than the competition’s — and as of February 2026, the tools to measure that comparison barely exist.

The Enterprise Bet

The person most responsible for Anthropic’s commercial trajectory is not Dario Amodei. It is his sister Daniela.

Daniela Amodei, Anthropic’s president and co-founder, spent seven years at Stripe building financial infrastructure before joining OpenAI to run operations and policy. After the 2021 departure, she built Anthropic’s commercial organization from scratch: the enterprise sales team, the cloud provider partnerships, the pricing and go-to-market strategy. In public appearances, she consistently emphasizes the operational side — hiring, revenue, customer relationships — while Dario focuses on research and safety philosophy. The division of labor is stark. Dario publishes 15,000-word essays about existential risk. Daniela closes multi-million-dollar deals with Fortune 500 procurement departments. Anthropic needs both, and the company’s growth reflects the effectiveness of the combination.

The numbers she has built are striking. As of late 2025, Anthropic had more than 300,000 business customers, with 80% of revenue coming from enterprise and developer relationships. The number of customers spending over $100,000 annually on Claude has grown sevenfold in the past year. Two years ago, twelve clients spent more than $1 million annually. Today, more than 500 exceed that threshold. Eight of the Fortune 10 use Claude.

The enterprise focus creates a revenue profile that Wall Street loves: high margins, predictable growth through account expansion, low churn rates, and long sales cycles that create competitive moats. Once a Fortune 500 company integrates Claude into its compliance workflows, its customer service operations, or its software development pipeline, the switching costs are substantial. Migrating prompts, retraining staff, rebuilding integrations, and re-validating safety compliance creates years of friction.

But the enterprise strategy also creates vulnerabilities. Consumer products generate network effects — every new ChatGPT user makes the platform more valuable for existing users through shared conversations, custom GPTs, and cultural relevance. Enterprise products do not generate these effects. Each customer’s deployment is private and siloed.

Consumer products also generate brand awareness that feeds enterprise adoption. Many enterprise AI deployments begin because a CTO or VP of Engineering used ChatGPT personally and recognized its potential for their organization. Anthropic, with its smaller consumer footprint, has less of this organic demand generation.

The company is aware of this dynamic and has taken steps to address it. The Claude consumer product has improved significantly, with subscriptions growing steadily. But Anthropic has chosen not to compete for consumer dominance: it has not matched ChatGPT’s viral consumer marketing, its mobile reach, or its depth of integration with consumer platforms such as Apple’s ecosystem.

The bet is that enterprise revenue is more durable than consumer revenue, and that the enterprise market for AI is large enough to build a $70 billion business without needing to win the consumer war. The bet may prove correct. But it depends on Anthropic maintaining its technical edge in the capabilities that enterprise customers value most: reliability, safety, instruction following, and sustained reasoning.

The Headcount Explosion

Anthropic’s workforce has grown from approximately 1,035 employees in September 2024 to 4,074 as of January 2026 — a quadrupling in sixteen months. The company has announced plans to triple its international workforce and expand its applied AI team fivefold.

The hiring spree spans geographies. Anthropic is opening its first Asia office in Tokyo, scaling operations across Europe with more than 100 new roles in Dublin and London, establishing a research hub in Zurich, and recruiting country leads for India, Australia, New Zealand, Korea, and Singapore. Nearly 80% of Claude’s usage now comes from outside the United States, and the expansion reflects a strategy to build enterprise relationships in the markets where demand is growing fastest.

The applied AI team — the group that helps customers deploy Claude at scale — is set to grow fivefold in 2026. This team is critical to Anthropic’s enterprise strategy: large companies do not simply sign up for an API and start using it. They need custom integrations, safety evaluations, compliance reviews, performance optimization, and ongoing support. The applied AI team provides all of this, and its current size is the primary bottleneck on Anthropic’s enterprise growth.

The scaling creates challenges. Anthropic’s culture, forged in a small team of researchers united by a shared safety mission, is difficult to maintain in a 4,000-person organization spread across multiple continents, where research, engineering, and sales now operate on different incentives, different metrics, and different definitions of success.

The question of whether Anthropic can maintain its safety-first culture at scale is not academic. It is the question on which the company’s long-term differentiation depends. If Anthropic becomes just another enterprise AI vendor — a company that talks about safety in its marketing but prioritizes growth in its decisions — the narrative that justifies its premium valuation collapses. Investors are paying for the belief that Anthropic is different. If the difference fades, the premium fades with it.

The Valuation Ladder

Anthropic’s valuation trajectory is a vertical line:

  • March 2025 (Series E): $61.5 billion. Led by Lightspeed Venture Partners, with participation from Bessemer, Cisco, D1 Capital, Fidelity, General Catalyst, Jane Street, Menlo Ventures, and Salesforce Ventures.

  • September 2025 (Series F): $183 billion. Led by ICONIQ, co-led by Fidelity and Lightspeed. The $13 billion round was the largest private fundraise in AI history at the time.

  • February 2026 (Series G): $380 billion. The $30 billion round was led by GIC (Singapore’s sovereign wealth fund) and Coatue Management, with participation from Sequoia Capital, ICONIQ, Lightspeed, Microsoft, and Nvidia.

The company’s valuation has increased sixfold in eleven months. At $380 billion, Anthropic is worth more than Salesforce, more than Coca-Cola, more than the annual economic output of all but a few dozen countries. The valuation is based on a revenue multiple that, even accounting for the company’s extraordinary growth rate, implies sustained 100%+ annual growth for multiple years.

The bull case for this valuation is straightforward: the enterprise AI market is in its infancy, Anthropic has a defensible position within it, and the company’s revenue growth shows no signs of decelerating. If Anthropic reaches its $70 billion revenue target by 2028 with margins consistent with enterprise software, the $380 billion valuation is arguably reasonable.

The bear case is equally straightforward: the AI market is subject to technological disruption, commodity economics, and regulatory risk. A breakthrough by a competitor could erode Anthropic’s technical edge. Price competition could compress margins. And capital raised at a $380 billion entry price leaves late-stage investors little room for outsized returns while diluting everyone who invested before them.

The valuation also embeds a specific assumption about the AI safety premium. Investors are paying not just for Anthropic’s current revenue and growth rate, but for the belief that Anthropic’s safety-first approach creates durable competitive advantages. If that belief proves wrong — if safety turns out to be a cost rather than an asset — the valuation premium disappears.

Anthropic vs. OpenAI: The Strategic Divergence

The competition between Anthropic and OpenAI is the defining rivalry of the AI industry, but characterizing it as a two-horse race misses the structural differences between the companies.

OpenAI is a consumer platform company that also serves enterprises. Anthropic is an enterprise infrastructure company that also serves consumers. This distinction shapes every strategic decision both companies make.

OpenAI’s strengths: brand recognition (ChatGPT is the generic term for AI chatbots), consumer scale (hundreds of millions of weekly users), content partnerships (with media companies, educational institutions, and entertainment platforms), and a broader product surface area (image generation, video, voice, custom GPTs).

Anthropic’s strengths: enterprise penetration (32% of production workloads vs. OpenAI’s 25%), developer tools (Claude Code’s dominance), safety reputation (critical for regulated industries), and technical leadership in sustained reasoning and instruction following.

The market share data suggests a convergence: OpenAI’s enterprise share has dropped from 50% in 2023 to 25% in 2025, while Anthropic’s has grown from 12% to 32% over the same period. But the convergence is misleading, because the total market is expanding so rapidly that both companies are growing in absolute terms. The real competition is not for share of today’s market but for position in tomorrow’s.

OpenAI’s consumer-first strategy creates a moat through network effects and brand. Anthropic’s enterprise-first strategy creates a moat through integration depth and switching costs. Both moats are real but vulnerable to different threats. OpenAI’s consumer moat is vulnerable to a new entrant with a viral product — the way ChatGPT itself threatened Google’s search dominance in 2023. Anthropic’s enterprise moat is vulnerable to a technological shift that makes current integrations obsolete — the way cloud computing made on-premise software integrations irrelevant.

The framing of this as a two-company race is itself misleading. OpenAI and Anthropic are not competing for the same customers in the same way. A closer analogy is AWS and Salesforce in the 2010s: both cloud companies, both growing fast, both valuable, but serving fundamentally different needs in the same broad ecosystem. The rivalry is real, but the outcome is more likely to be coexistence than winner-take-all.

The Safety Researchers vs. The Sales Team

Inside Anthropic, the tension between safety and growth manifests in ways that are rarely visible to the outside world.

The company’s organizational structure places safety researchers in a position of unusual authority. The Alignment Science team — responsible for understanding how models behave and why — has formal power to delay or block model deployments that fail safety evaluations. This is not a suggestion box. When Alignment Science flags a concern, the deployment timeline stops until the concern is resolved. No other major AI company grants comparable authority to its safety function.

In practice, this authority creates friction. The applied AI team, which works directly with enterprise customers, operates under market pressure that alignment researchers do not face. When a Fortune 500 client needs a specific capability by a specific date, and safety testing identifies an edge case that requires additional evaluation, the resulting conversation is uncomfortable. The sales team has a revenue commitment. The safety team has a principle. The resolution depends on the specific individuals involved, the severity of the concern, and — increasingly — the size of the deal at stake.

The tension became visible in May 2024, when Jan Leike resigned as head of OpenAI’s Superalignment team and wrote publicly that at OpenAI, “safety culture and processes have taken a backseat to shiny products.” Within weeks, Leike joined Anthropic to lead its alignment science efforts. The hire was a statement: the person who had publicly accused the industry leader of deprioritizing safety chose Anthropic as the place where safety still mattered. But it also raised expectations. If even Anthropic’s safety culture starts to erode under commercial pressure, where does someone like Leike go next?

Anthropic points to concrete evidence that standards have held: the ASL-3 activation for Opus 4, the expansion of the safety research team in proportion to overall headcount, and safety budgets that have grown faster than revenue. These are verifiable claims, and they matter.

But the organizational reality is that safety practices designed for a 200-person research lab must now function in a 4,000-person company with offices on four continents, shipping multiple products simultaneously. The researchers who built Constitutional AI and the enterprise sales teams pitching Claude to Fortune 500 procurement departments operate in different professional worlds. They have different incentives, different timelines, different definitions of what “good enough” means. Managing that gap is Anthropic’s most important internal challenge, and it will only get harder as the company grows.

The IPO Question

Anthropic has not announced plans for an initial public offering, but the trajectory points unmistakably in that direction.

The company has raised more than $50 billion in total funding across seven rounds and the strategic investments from Amazon and Google. Its investors include sovereign wealth funds, the world’s largest asset managers, and strategic technology companies — all of which need liquidity. The typical venture capital fund has a ten-year lifespan. Anthropic’s earliest investors are approaching year five. The $380 billion valuation limits the pool of potential acquirers to a handful of companies (Apple, Microsoft, Google, Amazon), all of which face antitrust scrutiny that would likely block such an acquisition. An IPO is, for practical purposes, the only exit path for most investors.

The timing is speculative, but industry analysts have pointed to late 2026 or early 2027 as the most likely window. By that point, Anthropic would have multiple years of revenue data showing the growth trajectory that public market investors demand, a governance structure that can be explained to institutional shareholders, and a product portfolio (Claude API, Claude Code, enterprise deployments) that generates the predictable recurring revenue that public markets value.

An IPO would create new pressures. Public companies face quarterly earnings expectations, activist investors, and SEC disclosure requirements that constrain strategic flexibility. Safety commitments that are easy to maintain in a private company — where the CEO can tell investors to be patient — become harder in a public company, where a missed earnings target triggers board-level conversations about cost reduction.

The Long-Term Benefit Trust was designed partly to address this scenario. Its escalating board control is intended to provide a structural counterweight to the short-term pressures of public markets. But no PBC with a comparable governance structure has ever operated at Anthropic’s scale as a public company. The experiment is unprecedented.

There is also the question of what an IPO signals about Anthropic’s identity. The company was founded as an alternative to the commercialization of AI. Its earliest researchers joined because they wanted to work at a place where safety came first and profits were secondary. Going public does not necessarily contradict that ethos, but it introduces a new constituency — public shareholders — whose interests may diverge from the safety mission in ways that are difficult to predict.

Anthropic’s leadership has been careful to avoid public discussion of IPO plans, saying only that the company will consider all options when the time is right. But the time is approaching faster than most people expected. The $380 billion valuation creates its own gravity: at that scale, the question is not whether Anthropic will go public, but how it will preserve its distinctive character when it does.

The China Variable

One factor that receives insufficient attention in discussions of Anthropic’s strategy is the geopolitical dimension of AI competition.

In his February 2026 interview with Dwarkesh Patel, Dario Amodei articulated a vision for expanding AI infrastructure into Africa and other emerging markets, framing it partly as a strategy for maintaining Western AI leadership against China. The statement was striking for its explicitness: the CEO of a safety-focused AI company publicly identified geopolitical competition as a motivation for scaling.

The geopolitical framing matters because it changes the calculus around safety. If AI development is a race between democratic and authoritarian systems, then slowing down for safety carries a different cost than it would in a purely commercial competition. Amodei has argued that the United States and its allies need frontier AI capabilities to maintain strategic advantage, and that Anthropic’s safety practices make it a more reliable partner for government use cases than competitors with less rigorous testing.

This argument has gained traction in Washington. Anthropic has expanded its government affairs team and engaged with policymakers on AI regulation and national security applications. The company’s safety reputation gives it credibility with legislators who are skeptical of the technology industry’s self-governance claims. At the same time, the government engagement creates a potential conflict: defense and intelligence applications of AI may require capabilities that Anthropic’s safety framework would normally restrict.

The China variable became concrete in January 2025, when DeepSeek released its R1 model — an open-source reasoning model that matched or exceeded the performance of OpenAI’s o1 on several benchmarks, at a fraction of the cost. DeepSeek had reported a training compute bill of roughly $5.6 million for V3, the base model underlying R1. If representative of its full costs, that figure demolished the assumption that frontier AI required billions in compute investment. The stock market reacted immediately: Nvidia lost $593 billion in market capitalization in a single day, the largest single-day decline in market value for any company in U.S. stock market history.

For Anthropic, the DeepSeek shock carried a specific threat. If Chinese companies can produce competitive models at dramatically lower cost, the commercial case for paying a premium for Anthropic’s safety features erodes. Enterprise customers facing cost pressure may conclude that a cheaper model without Constitutional AI is “good enough” for their compliance workflows and customer service bots. The safety premium only holds if customers believe safety produces measurably better outcomes — and measuring AI safety outcomes remains more art than science.

Anthropic’s counter-argument is that safety is not just an ethical choice but a technical one. Models trained with Constitutional AI are more reliable in production, more predictable under stress, and more resistant to adversarial manipulation — qualities that matter more in high-stakes enterprise deployments than benchmark scores. The argument has merit. But it will be tested as Chinese models continue to improve and Western enterprise customers continue to face budget scrutiny.

What Could Go Wrong

Growth curves like Anthropic’s tend to feel inevitable right up until they break. The company faces several specific risks that its valuation does not appear to price in.

The commodity risk: AI models are becoming increasingly similar in capability. If Claude’s technical advantages narrow — if a competitor matches its reasoning quality, its safety features, and its developer experience — then pricing power erodes and the enterprise business becomes a commodity. The $380 billion valuation assumes sustained differentiation. Commodity economics do not support $380 billion.

The safety paradox deepens: As Anthropic’s models become more capable, the tension between safety and capability will intensify. Customers will demand capabilities that safety evaluations flag as risky. Competitors will offer those capabilities without restriction. Anthropic will face an escalating series of decisions about where to draw the line. Each decision will either anger customers (too restrictive) or compromise the mission (too permissive).

The concentration risk: Anthropic’s revenue is concentrated among large enterprise customers. The loss of a handful of major accounts could materially impact growth. And enterprise customers are sophisticated negotiators — as the market matures and alternatives proliferate, they will extract pricing concessions that compress margins.

The governance test: The Long-Term Benefit Trust has never been tested under extreme pressure. If Anthropic faces a moment where safety concerns conflict with a decision worth billions of dollars, the LTBT’s untested governance mechanisms will determine the outcome. There is no precedent for how this structure performs under that kind of stress.

The talent risk: Anthropic’s most important asset is its researchers. The AI talent market is ferociously competitive, and researchers who believe the company is drifting from its safety mission may leave — as they left OpenAI before. The rapid scaling from 1,000 to 4,000 employees dilutes the cultural cohesion that attracted those researchers in the first place.

The regulatory unknown: Governments worldwide are developing AI regulations that could significantly impact Anthropic’s business. The European Union’s AI Act imposes compliance requirements on frontier models. The United States is debating federal AI legislation. China’s AI regulations create market access barriers. Anthropic’s safety reputation positions it well for a regulatory environment, but the specific shape of regulation is unpredictable, and compliance costs could be substantial.

The Bigger Picture

Anthropic’s story is ultimately a story about whether capitalism can be harnessed to solve problems that capitalism creates.

The AI industry’s incentive structure rewards speed, scale, and capability. Companies that build more powerful systems faster attract more investment, more talent, and more customers. Companies that slow down to ensure safety risk being overtaken by competitors who do not. This dynamic — the classic race to the bottom — is the central problem of AI governance, and no amount of principled leadership can fully overcome it.

Anthropic’s answer is to try to make safety pay. If Constitutional AI produces better models, if the Responsible Scaling Policy builds enterprise trust, if the Long-Term Benefit Trust provides governance that investors value, then the incentive structure aligns: doing the right thing is also the profitable thing. The company’s extraordinary financial performance is evidence — not proof, but evidence — that this alignment is possible.

But the alignment is fragile. It requires safety practices to keep producing measurably better products — not just safer ones, but more reliable, more accurate, more useful. It requires enterprise customers to keep paying a premium for those qualities when cheaper alternatives exist. It requires investors to tolerate a governance structure that could, in theory, overrule them. And it requires a growing workforce to maintain commitments that were easier to hold when everyone fit in a single room.

None of these conditions are guaranteed. Some of them are already under strain.

The numbers are remarkable: $1 billion to $14 billion in fourteen months, $61.5 billion to $380 billion in eleven months, 1,000 to 4,000 employees in sixteen months, 4% of global public code commits authored by a nine-month-old product. These are not the metrics of a company held back by safety. They suggest that safety, in Anthropic’s implementation, has functioned as a competitive advantage.

The harder question is what happens when safety stops being advantageous. When a major customer demands a capability that the safety team flags. When a competitor ships that capability first and starts winning deals. When the quarterly revenue target depends on a deployment the alignment researchers want to delay. Those decisions have not arrived yet — or if they have, they have been resolved quietly. They will get louder.

Dario Amodei walked out of OpenAI because he believed a safety-first company could win at the frontier. Five years later, his company is at the frontier. What remains to be proven is whether the safety-first part survives what the frontier does to every company that reaches it.

What happens at Anthropic over the next two years will shape more than one company’s stock price. If the safety-first model works — if Anthropic can sustain its growth while maintaining meaningful safety commitments — it creates a template that other companies can adopt. The incentive structure changes: safety becomes the smart bet, not the sacrifice. If the model fails — if safety erodes quietly under quarterly pressure, if the LTBT proves toothless, if the researchers leave — it will be taken as proof that the market cannot regulate itself.

Anthropic’s $380 billion valuation prices in the optimistic scenario. The next few years will determine if the price was right.


This article provides a deep analysis of Anthropic’s business strategy, technical approach, and organizational structure as of February 2026. Published February 18, 2026.

About the Author

Gene Dai is the co-founder of OpenJobs AI, an AI-powered recruitment platform. He writes about the intersection of artificial intelligence, enterprise technology, and the future of work.