The $5 Billion Valuation That Couldn't Last
In October 2025, Intel Corporation quietly approached SambaNova Systems about a potential acquisition. The discussions represented a dramatic fall for what was once Silicon Valley's best-funded AI chip startup, a company that had raised $1.13 billion and achieved a $5 billion valuation just four years earlier.
For Rodrigo Liang, SambaNova's co-founder and CEO, the Intel conversations marked an inflection point. The 55-year-old chip veteran had spent nearly two decades designing SPARC processors at Sun Microsystems and Oracle before launching SambaNova in 2017 with a bold thesis: that artificial intelligence demanded an entirely new computing architecture, one that would make NVIDIA's dominant GPUs obsolete.
But by late 2025, SambaNova's implied valuation had cratered to $2.13 billion—a 57% decline from its 2021 peak. The company had explored multiple funding rounds throughout 2024 and early 2025, only to watch talks stall repeatedly as investors questioned whether any AI chip startup could profitably compete with NVIDIA's entrenched ecosystem.
According to people familiar with the matter, SambaNova's management began exploring a sale after talks for a new funding round stalled in mid-2025. Intel emerged as the most serious buyer, drawn to SambaNova's dataflow architecture and existing customer relationships with government agencies and Fortune 500 enterprises.
If completed, a deal would likely value SambaNova significantly below its $5 billion 2021 mark, a markdown that would crystallize painful losses for late-stage investors including SoftBank Vision Fund 2, BlackRock, and Temasek.
The SPARC Processor Veteran's Second Act
Rodrigo Liang's journey to founding SambaNova began in the crucible of enterprise computing's most demanding performance challenges. After earning both bachelor's and master's degrees in electrical engineering from Stanford University, Liang joined Hewlett-Packard before moving to Afara Websystems, a startup focused on multi-core processor design.
When Sun Microsystems acquired Afara in 2002, Liang's career trajectory accelerated. He became director of engineering for Sun's UltraSPARC processor development, working on the Niagara line of multi-core chips that would define enterprise computing for the next decade. Industry veterans credit Liang with instrumental contributions to the world's first truly scalable multi-core processors—chips that balanced compute density with power efficiency long before these metrics became fashionable in AI workloads.
Oracle's 2010 acquisition of Sun brought Liang into Larry Ellison's orbit. Promoted to Senior Vice President of SPARC Processor and ASIC Development, Liang led one of the industry's largest chip engineering organizations, releasing 12 major SPARC processors and ASICs for Oracle's enterprise servers over 15 years.
One former Oracle executive who worked with Liang during this period told analysts that his team's work on memory subsystem optimization and interconnect fabric design would later prove foundational to SambaNova's differentiated architecture: "Rodrigo understood better than most that for AI workloads, data movement kills you. At Oracle, we optimized for throughput and latency in database operations. Those same principles—minimizing data movement, maximizing local compute—became the core of SambaNova's dataflow approach."
The Founding: A Stanford Reunion and $56 Million Bet
In 2017, Liang left Oracle to reunite with several Stanford colleagues and chip industry veterans. The co-founding team included Kunle Olukotun, the Stanford professor and chip multiprocessor pioneer who had founded Afara Websystems, and several other Sun/Oracle alumni who had spent decades optimizing compute architectures for demanding workloads.
SambaNova emerged from stealth in April 2018 with a $56 million Series A round led by Walden International and GV (formerly Google Ventures). The pitch was ambitious but technically grounded: traditional CPU and GPU architectures, with their fixed instruction sets and limited on-chip memory, had reached computational limits for AI workloads. What AI needed was a reconfigurable dataflow architecture that could reshape itself for each specific neural network, eliminating wasted compute cycles and data movement overhead.
According to SambaNova's early technical whitepapers, the Reconfigurable Dataflow Unit (RDU) would differ fundamentally from GPUs. Rather than executing instructions sequentially on thousands of cores, the RDU would map an entire AI model's computational graph directly onto hardware, creating a custom dataflow pipeline for each application. This approach promised dramatic efficiency gains—particularly for inference workloads where the same model ran millions of times.
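To make the contrast concrete, here is a toy Python sketch of the two execution styles. It is purely conceptual; the function names and structure are invented for illustration and do not reflect SambaNova's actual SambaFlow toolchain. The point is the shape of the computation: one style launches an operation at a time and materializes every intermediate result, while the other wires the whole graph into a pipeline once and streams values through it.

```python
# Conceptual toy: instruction-style vs. dataflow-style execution.
# Names and structure are invented for illustration only.

def kernel_style(batch, graph):
    """GPU-like: launch one 'kernel' per op, materializing the full
    intermediate result between launches."""
    results = list(batch)
    for op in graph:                        # sequential kernel launches
        results = [op(x) for x in results]  # intermediate written back
    return results

def _stage(op, upstream):
    """One pipeline stage: consume from the upstream neighbor, stream out."""
    for x in upstream:
        yield op(x)

def dataflow_style(batch, graph):
    """Dataflow-like: configure the graph once as a chain of streaming
    stages; each value flows stage-to-stage, no batch-level intermediates."""
    stream = iter(batch)
    for op in graph:            # "configure" the pipeline once, spatially
        stream = _stage(op, stream)
    return list(stream)

if __name__ == "__main__":
    # A stand-in "model graph": scale, shift, rectify.
    graph = [lambda x: 2 * x, lambda x: x + 1, lambda x: max(x, 0)]
    assert kernel_style([1, -3, 4], graph) == dataflow_style([1, -3, 4], graph)
```

In Python the two functions cost the same, which is exactly the point: the distinction only pays off in silicon, where the streaming version never writes intermediates to off-chip memory.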
The technical vision attracted immediate Silicon Valley enthusiasm. By December 2019, SambaNova had raised an additional $250 million in Series B funding, reaching "unicorn" status with a valuation exceeding $1 billion. Investors included Intel Capital—a strategic backer that would later explore acquiring the company—alongside BlackRock, GV, and Walden International.
The $676 Million SoftBank Bet: 2021's AI Chip Exuberance
April 2021 marked SambaNova's apex. SoftBank Vision Fund 2 led a $676 million Series D round, valuing the company at $5 billion and crowning it "the world's best-funded AI startup" in breathless press coverage. The round included participation from Temasek, GIC, and existing investors, bringing SambaNova's total raised capital to over $1 billion.
Deep Nishar, Senior Managing Partner at SoftBank Investment Advisers, praised SambaNova's "leading systems architecture that is flexible, efficient and scalable," calling it "a holistic software and hardware solution for customers."
The $5 billion valuation reflected 2021's frothy AI investment climate, where investors bet enormous sums on companies promising to dethrone NVIDIA before they had proven commercial viability at scale. SambaNova's customer list at the time included impressive names—the U.S. Department of Energy's Oak Ridge and Lawrence Livermore National Laboratories, plus early enterprise pilots with financial institutions—but actual revenue remained modest relative to the valuation.
One venture capital partner who passed on participating in the Series D later reflected: "The valuation assumed SambaNova would capture 10-15% of the AI training market and meaningful inference share. But NVIDIA's CUDA ecosystem was already so entrenched, and their roadmap—Ampere, then Hopper, now Blackwell—kept extending their performance lead. We couldn't model a realistic path to $500 million in revenue, let alone the $2-3 billion you'd need to justify a $5 billion valuation."
The Technology: Dataflow Architecture's Promise and Limitations
At SambaNova's core lies the Reconfigurable Dataflow Unit, a chip architecture that represents genuine technical innovation in an industry often characterized by incremental GPU improvements.
The SN40L, SambaNova's flagship chip unveiled in September 2023, combines a three-tier memory hierarchy: 520 MB of on-chip SRAM, 64 GB of HBM3 high-bandwidth memory, and 1.5 TB of DDR5 DRAM directly attached to each chip. This massive capacity, far exceeding any single GPU's, allows SambaNova to keep entire models in memory, eliminating the performance-crushing need to shuffle weights between chips during inference.
Fabricated on TSMC's 5nm process and packaged using advanced Chip-on-Wafer-on-Substrate (CoWoS) technology, the SN40L delivers 10.2 petaflops of bf16 compute across 1,040 reconfigurable cores. More impressively, SambaNova claimed a single SN40L node could serve models of up to 5 trillion parameters, vastly larger than competitive offerings from Cerebras or Groq.
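A quick capacity check shows what that claim means in practice. The sketch below is plain arithmetic, using the tier sizes quoted above and assuming 2 bytes per parameter (bf16); it asks whether a model's weights fit in a single chip's attached memory.

```python
# Back-of-envelope: do a model's weights fit in one SN40L's memory tiers?
# Tier sizes are the published figures quoted above; bf16 = 2 bytes/param.
GiB = 2**30
TIERS = {"SRAM": 520 * 2**20, "HBM3": 64 * GiB, "DDR5": int(1.5 * 2**40)}

def weight_bytes(params_billion, bytes_per_param=2):
    return params_billion * 1e9 * bytes_per_param

for name, b in [("Llama 3.1 8B", 8), ("Llama 3.1 70B", 70), ("Llama 3.1 405B", 405)]:
    need = weight_bytes(b)
    fits = [tier for tier, cap in TIERS.items() if need <= cap]
    print(f"{name}: ~{need / GiB:,.0f} GiB of weights; fits in: {', '.join(fits)}")

# 8B (~15 GiB) fits in HBM3; 70B (~130 GiB) and 405B (~754 GiB) fit only in
# the DDR5 tier. For contrast, a single 80 GB H100 cannot hold even the 70B
# model's weights, forcing multi-GPU sharding and inter-chip traffic.
```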
SambaNova's benchmark claims were aggressive: running Meta's Llama 3.1 70B model at 461 tokens per second and the massive 405B variant at 132 tokens per second, all at full bf16/fp32 precision without quantization tricks that sacrifice accuracy. The company claimed "40X better performance per area than Groq and 10X better than Cerebras" on these workloads.
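Those throughput claims also hint at why the memory hierarchy, not peak petaflops, is the real constraint. A standard back-of-envelope estimate (mine, not SambaNova's) says each generated token must read roughly every weight once and costs about 2 FLOPs per parameter:

```python
# Why LLM decode is memory-bound: per generated token, read ~all weights
# once (bandwidth) and do ~2 FLOPs per parameter (compute).
# Model sizes and tokens/sec are the benchmark claims quoted above.
def decode_demand(params_billion, tokens_per_sec, bytes_per_param=2):
    p = params_billion * 1e9
    bandwidth_tb_s = p * bytes_per_param * tokens_per_sec / 1e12
    compute_pflops = 2 * p * tokens_per_sec / 1e15
    return bandwidth_tb_s, compute_pflops

for name, params, tps in [("Llama 3.1 70B", 70, 461), ("Llama 3.1 405B", 405, 132)]:
    bw, pf = decode_demand(params, tps)
    print(f"{name} @ {tps} tok/s: ~{bw:,.0f} TB/s weight traffic, ~{pf:.2f} PFLOPs")

# ~65 and ~107 TB/s of weight traffic, but only ~0.06-0.11 PFLOPs of compute,
# a sliver of the chip's 10.2 PFLOP peak. Hitting these speeds presumably
# depends on aggregating SRAM/HBM bandwidth across a multi-socket node.
```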
The architecture's elegance lay in its reconfigurability. Unlike GPUs with fixed instruction sets, the RDU could be programmed to create application-specific dataflow pipelines for each AI model. Pattern Compute Units (PCUs) executed innermost parallel operations, Pattern Memory Units (PMUs) provided intelligent on-chip storage, and a high-speed 3D switching fabric connected everything with minimal latency.
But this same flexibility created profound software challenges. NVIDIA's CUDA programming environment had two decades of optimization, documentation, and community support. Every major AI framework—PyTorch, TensorFlow, JAX—ran natively on CUDA with vast libraries of pre-optimized operations. SambaNova's SambaFlow software stack, by contrast, required teams to learn new programming paradigms and often restructure models to achieve optimal performance.
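The lock-in shows up even in ordinary framework code. The snippet below is the device-selection idiom found in virtually every PyTorch codebase; it runs unmodified on any NVIDIA GPU. The commented lines sketch what targeting a non-CUDA accelerator typically entails; the plugin and device names there are hypothetical placeholders, not real packages.

```python
import torch
import torch.nn as nn

# The idiom nearly every PyTorch codebase ships with: assume CUDA if present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(4096, 4096), nn.GELU()).to(device)
x = torch.randn(8, 4096, device=device)
with torch.no_grad():
    y = model(x)  # runs unchanged on any NVIDIA GPU via cuBLAS/cuDNN kernels

# Targeting a non-CUDA accelerator generally means an out-of-tree backend:
#   import vendor_torch_plugin            # hypothetical vendor package
#   device = torch.device("vendor_rdu")   # hypothetical device string
# plus porting every hand-written CUDA kernel and revalidating operator
# coverage, which is the switching cost the ecosystem argument turns on.
```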
A former SambaNova customer told trade publications: "The peak performance numbers were real—we measured them. But getting there required our ML engineers to spend weeks understanding dataflow principles and refactoring our models. For production deployment, we needed confidence that any future model would run well, not just the one we'd optimized. That breadth of support just didn't exist yet."
The Market Pivot: From Training to Inference
By early 2024, SambaNova faced a strategic inflection point. The company had initially positioned its RDU architecture for both AI training and inference workloads, competing directly with NVIDIA's full-stack dominance. But the economics of AI training—already dominated by clusters of tens of thousands of H100 and H200 GPUs—proved insurmountable for a startup with limited production capacity and a nascent software ecosystem.
Liang made a decisive strategic pivot: SambaNova would focus exclusively on AI inference, the process of running trained models to generate predictions and responses. The logic was compelling. According to McKinsey projections, data-center AI inference hardware would reach $9-10 billion by 2025, double the size of the training hardware market, and inference would account for up to 90% of AI computing demand by 2030.
More importantly, inference presented architectural advantages for SambaNova's dataflow approach. Inference workloads run the same model millions or billions of times, precisely the use case where custom dataflow pipelines excel. The RDU's massive on-chip memory could keep entire models resident, eliminating the inter-chip communication overhead that plagued GPU-based inference clusters.
In practice, the inference focus meant targeting two customer segments: cloud service providers building AI inference infrastructure, and enterprises wanting on-premise AI capabilities without NVIDIA dependence.
SambaNova launched "SambaNova Cloud" in 2024, offering API access to models running on SN40L hardware. The platform served Meta's Llama models at impressive speeds: over 100 tokens per second for the 405B-parameter variant at full precision, substantially faster than GPU-based alternatives. When DeepSeek released its R1 reasoning model in early 2025, SambaNova was among the first providers to serve the full 671B-parameter version, delivering 231 tokens per second.
For on-premise customers, SambaNova developed "SambaManaged," promising fully deployed AI data centers in just 90 days compared to the typical 18-24 month timeline for GPU-based infrastructure. The pitch emphasized energy efficiency: SambaNova racks consumed just 10 kilowatts versus 120 kilowatts for equivalent GPU configurations, eliminating expensive liquid cooling requirements and power upgrades.
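Taken at face value, the power gap compounds into meaningful operating costs. A rough annualization, my arithmetic on the rack figures quoted above, with an assumed $0.10/kWh industrial electricity rate:

```python
# Annual energy cost per rack, using the power figures quoted above.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # USD; assumed industrial rate, for illustration only

for label, kw in [("SambaNova rack (claimed)", 10),
                  ("Equivalent GPU rack (claimed)", 120)]:
    kwh = kw * HOURS_PER_YEAR
    print(f"{label}: {kwh:,} kWh/yr ≈ ${kwh * PRICE_PER_KWH:,.0f}/yr")

# 87,600 kWh (~$8,760/yr) vs. 1,051,200 kWh (~$105,120/yr) per rack. The
# bigger lever is avoided infrastructure: at 120 kW a rack needs liquid
# cooling and facility power upgrades that a 10 kW rack does not.
```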
These product launches attracted genuine customer traction. In October 2025, SambaNova announced three "sovereign AI cloud" partnerships: with SCX in Australia, Infercom in Germany, and Argyll in the United Kingdom. Each partnership positioned SambaNova as the infrastructure provider for privacy-conscious, energy-efficient national AI clouds complying with local data sovereignty requirements.
But customer wins remained tactical rather than transformational. The sovereign AI partnerships represented promising beachheads in regional markets where NVIDIA's dominance faced regulatory and political headwinds. Yet total contract values remained modest—measured in tens of millions of dollars annually rather than the hundreds of millions SambaNova needed to justify its billion-dollar-plus valuation.
The Competitive Battlefield: NVIDIA's Unshakeable Moat
SambaNova's struggle illuminates the fundamental challenge facing all AI chip startups: NVIDIA's ecosystem advantage has become nearly insurmountable.
By 2025, NVIDIA controlled over 80% of the AI accelerator market, with estimates suggesting 90%+ share of training workloads. The H100 and H200 GPUs, despite premium pricing, remained supply-constrained well into 2024. NVIDIA's late-2024 rollout of the Blackwell architecture, headlined by the GB200 Grace Blackwell Superchip pairing two B200 GPUs rated at up to 20 petaflops of FP4 compute each, extended the company's technology lead for another generation.
But NVIDIA's dominance wasn't merely about raw chip performance. The company's true moat lay in CUDA, the programming environment that had accumulated two decades of optimizations. Every major AI framework ran natively on CUDA. Cloud providers offered CUDA-optimized instances. Universities taught CUDA programming. Startups building AI applications assumed CUDA availability.
This ecosystem inertia meant that even when alternative chips demonstrated superior performance on specific benchmarks, customers faced switching costs that often outweighed potential benefits. One AI startup CTO explained the calculus: "We benchmarked SambaNova, Groq, and Cerebras against NVIDIA. All three showed better tokens-per-second on inference. But our entire codebase assumed CUDA. Our ML engineers knew CUDA. Our cloud budget included negotiated NVIDIA discounts. To switch meant rewriting code, retraining staff, and negotiating new contracts—all for maybe 30% better performance. The math didn't work."
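The CTO's arithmetic is easy to reproduce. Below is a hedged sketch of the breakeven test; every dollar figure is invented for illustration, with only the "30% better performance" and the lost-discount effect taken from the quote.

```python
# Toy switching-cost model behind "the math didn't work".
# All dollar inputs are illustrative assumptions, not vendor figures.
annual_inference_spend = 2_000_000  # USD/yr on NVIDIA today (assumed)
perf_advantage = 1.30               # 30% better tokens/sec, per the quote
migration_cost = 1_500_000          # code ports, retraining, contracts (assumed)
discount_lost = 0.10                # negotiated NVIDIA discount given up (assumed)

# Same workload at 1.3x throughput needs ~1/1.3 the capacity...
gross_savings = annual_inference_spend * (1 - 1 / perf_advantage)
# ...minus the discount that walks out the door with the old contract.
net_savings = gross_savings - annual_inference_spend * discount_lost

years = migration_cost / net_savings if net_savings > 0 else float("inf")
print(f"net savings ≈ ${net_savings:,.0f}/yr; breakeven ≈ {years:.1f} years")
# ≈ $262k/yr net and ≈ 5.7 years to break even: longer than a GPU generation,
# which is roughly what "the math didn't work" means in practice.
```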
SambaNova's competitors faced identical challenges. Cerebras Systems, with its wafer-scale WSE-3 chip containing 900,000 cores and 44 GB of on-chip SRAM, went public in October 2024 but saw its stock struggle as investors questioned profitability. Groq, with its Language Processing Unit (LPU) architecture delivering extreme inference speeds, raised $750 million at a $6.9 billion valuation but similarly struggled to convert technical superiority into market share.
The competitive dynamics created a trap: AI chip startups needed massive scale to achieve manufacturing cost parity with NVIDIA, but couldn't reach massive scale without first displacing NVIDIA's installed base. This chicken-and-egg problem left startups burning capital on R&D and sales while NVIDIA extracted monopoly profits to fund even more aggressive roadmaps.
The Funding Winter: When Capital Markets Lost Faith
Throughout 2024 and early 2025, SambaNova quietly sought additional funding to extend its runway and scale production. Multiple sources familiar with the fundraising process described a brutal environment in which investors who had eagerly backed AI chip startups in 2021 now demanded a path to profitability within 18-24 months.
The macroeconomic backdrop didn't help. Rising interest rates through 2023 and 2024 had fundamentally shifted venture capital return hurdles. A $5 billion valuation required eventual exit multiples that seemed increasingly fantastical as public market comparables compressed. Cerebras's underwhelming public debut—the stock traded below its IPO price for months—spooked private investors contemplating late-stage rounds in competing chip startups.
According to people familiar with SambaNova's fundraising conversations, several factors repeatedly stalled talks:
First, revenue scale remained elusive. While SambaNova had secured impressive customer logos and sovereign cloud partnerships, total annual recurring revenue by mid-2025 was estimated at under $50 million, a fraction of what investors expected given the company's $1 billion-plus capital base and eight-year operating history.
Second, unit economics looked challenging. The SN40L's advanced packaging and massive memory configurations resulted in high manufacturing costs. Without NVIDIA's production volumes, SambaNova couldn't negotiate comparable pricing from TSMC and HBM suppliers. One semiconductor analyst estimated SambaNova's gross margins at 30-40%, versus NVIDIA's 70%+ margins on data center GPUs—a structural disadvantage that would persist until SambaNova reached vastly greater scale.
Third, the customer acquisition cycle remained painfully slow. Enterprises evaluating AI infrastructure typically ran 6-12 month pilots before committing to production deployments. Each customer required hands-on technical support to port models to SambaFlow and optimize performance. This high-touch sales model limited SambaNova's ability to scale revenue quickly, even as the company's burn rate, funding R&D, manufacturing, and a growing sales organization, exceeded $200 million annually (a sketch of the resulting breakeven math follows this list).
Finally, investor patience was exhausted. SoftBank Vision Fund 2, which had led SambaNova's $5 billion valuation round in 2021, faced its own challenges. The Vision Fund's massive losses on investments in WeWork, Katerra, and other "unicorns" that never achieved profitability created internal pressure to mark down troubled portfolio companies and avoid throwing good money after bad.
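The second and third factors combine into a stark breakeven bar. The sketch below uses only the figures quoted above, the 30-40% margin estimate and the $200 million burn (treated here as fixed operating cost); everything else is arithmetic.

```python
# Revenue needed to cover operating burn at a given gross margin.
# Margin range and burn figure are the estimates quoted above.
ANNUAL_BURN = 200_000_000  # USD; treated as fixed operating cost

for label, margin in [("SambaNova, low estimate", 0.30),
                      ("SambaNova, high estimate", 0.40),
                      ("NVIDIA-like margins", 0.70)]:
    breakeven = ANNUAL_BURN / margin
    print(f"{label}: {margin:.0%} margin -> ${breakeven / 1e6:,.0f}M revenue to break even")

# At 30-40% margins, covering a $200M burn takes roughly $500-667M of
# revenue, more than ten times the sub-$50M ARR estimated above, versus
# about $286M at NVIDIA-like margins. Scale was not optional.
```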
By October 2025, when Bloomberg reported that SambaNova had retained advisors to explore strategic alternatives including a potential sale, the company's implied valuation had fallen to $2.13 billion—a 57% decline that crystallized the market's loss of faith in pure-play AI chip startups.
The Intel Conversations: A Lifeline or Acquihire?
Intel's interest in acquiring SambaNova reflects the chip giant's own desperate struggle to remain relevant in AI computing. Despite decades of dominance in CPUs, Intel had been comprehensively outmaneuvered by NVIDIA in AI accelerators and risked permanent marginalization as data center workloads shifted to AI.
For Intel, SambaNova offered several strategic attractions:
Differentiated Architecture: Intel's internal AI efforts—including the Gaudi line of AI accelerators acquired through the $2 billion Habana Labs purchase in 2019—had failed to gain meaningful market traction. Gaudi 3, launched in 2024, delivered respectable performance but suffered from the same ecosystem challenges plaguing all NVIDIA alternatives. SambaNova's dataflow architecture represented a genuinely different technical approach that could complement Intel's existing portfolio.
Customer Relationships: SambaNova's deployments at national laboratories, government agencies, and sovereign cloud providers aligned with Intel's enterprise and public sector strengths. These customers often preferred non-NVIDIA alternatives for strategic reasons—supply diversity, national security considerations, or data sovereignty requirements—creating natural synergies with Intel's positioning.
Executive Connections: Lip-Bu Tan, who became Intel's CEO in 2025, maintained close ties to SambaNova through his venture firm Walden International, which led SambaNova's $56 million Series A in 2018. These relationships facilitated deal discussions and technical due diligence.
Talent Acquisition: Even if SambaNova's technology proved difficult to integrate, Intel would gain access to one of Silicon Valley's most accomplished chip design teams, including Liang and numerous former Sun/Oracle veterans with deep expertise in high-performance computing architectures.
For SambaNova, an Intel acquisition would represent both failure and vindication. Failure, because the company couldn't achieve the independent scale necessary to challenge NVIDIA as envisioned. Vindication, because the technology's value was sufficiently recognized that a $185 billion chip incumbent deemed it worth acquiring.
People familiar with the negotiations described price as the primary sticking point. Intel faced its own financial pressures—the company had announced massive layoffs and restructuring through 2024—limiting its acquisition budget. SambaNova's late-stage investors, led by SoftBank, faced substantial paper losses on a sale below $5 billion but might accept a deal rather than risk complete write-offs if SambaNova's cash runway expired.
As of November 2025, talks remained preliminary. Other potential acquirers reportedly included cloud providers exploring vertical integration into AI chips and international semiconductor firms seeking to access SambaNova's dataflow intellectual property.
The Inference Revolution That Never Quite Arrived
SambaNova's strategic pivot to inference was predicated on a market thesis that has proven slower to materialize than anticipated. Industry analysts had predicted that inference workloads would explode in 2024-2025 as enterprises deployed AI applications at scale, creating massive demand for cost-effective inference acceleration.
The thesis contained truth: inference workloads were indeed growing exponentially. OpenAI's ChatGPT alone served hundreds of millions of queries daily. Meta deployed Llama models across its 3+ billion users for content recommendations and ad optimization. Every major enterprise piloted AI applications that would eventually require enormous inference capacity.
But the inference revolution's economics played out differently than SambaNova anticipated. Rather than displacing GPUs, inference workloads largely ran on the same H100 and H200 clusters used for training, because cloud providers had already deployed these assets and customers benefited from unified infrastructure. NVIDIA's TensorRT inference optimization software extracted impressive performance from the same GPUs used for training, reducing the urgency to adopt specialized inference chips.
Moreover, the rise of "reasoning models", AI systems like OpenAI's o1 and DeepSeek's R1 that generated thousands of tokens per query while "thinking", shifted inference economics. These models required such massive compute that even SambaNova's efficiency advantages provided only modest total-cost-of-ownership improvements. A 2X performance improvement on a workload requiring $1 million in annual compute was valuable. But that same 2X improvement meant little if customers faced multi-million-dollar migration costs to SambaNova's platform.
The "agentic AI" trend that Liang frequently highlighted in 2025 interviews—autonomous AI systems generating 10X to 100X more tokens per task than simple chatbots—theoretically benefited SambaNova's high-throughput architecture. In practice, these agentic systems remained early-stage experiments rather than production workloads generating procurement budgets.
Rodrigo Liang's Leadership: Technical Brilliance Meets Commercial Reality
Current and former SambaNova employees describe Liang as a technically brilliant but operationally cautious leader, whose strengths in chip architecture didn't always translate to the software-intensive and fast-moving AI market.
One former executive praised Liang's technical vision: "Rodrigo correctly identified that AI workloads have fundamentally different data movement patterns than traditional HPC. The RDU architecture isn't incremental—it's a genuine rethinking of how compute should work. Very few CEOs have the chip design depth to make those architectural choices correctly."
But the same executive noted execution challenges: "SambaNova came from the enterprise hardware world where you spend three years perfecting a chip, launch it, and support it for five years. AI moved faster. By the time we perfected SN40L, model architectures had evolved. Transformer variants emerged that didn't map as cleanly to our dataflow approach. The market needed continuous iteration, but our DNA was big-bang releases."
Go-to-market aggressiveness was another gap. NVIDIA's Jensen Huang cultivated personal relationships with every major AI lab CEO, offering early access to next-generation hardware and engineering resources. Groq's Jonathan Ross personally demoed the LPU architecture to dozens of startups. Cerebras's Andrew Feldman pitched sovereign AI infrastructure directly to national governments.
Liang, by contrast, maintained a lower profile, focusing on technical partnerships with national laboratories and established enterprises. This approach generated credible deployments but struggled to create the viral momentum that AI chip startups needed to overcome ecosystem inertia.
To Liang's credit, he recognized SambaNova's inference opportunity earlier than competitors. The 2024 strategic pivot, the rapid-deployment SambaManaged product, and the 2025 sovereign AI cloud partnerships all demonstrated strategic adaptability. But these moves came after NVIDIA had already locked in multi-year GPU purchase commitments from the largest cloud providers and AI labs, foreclosing the most lucrative market opportunities.
The Broader Reckoning: AI Chip Startup Economics Don't Work
SambaNova's struggles illuminate a brutal truth: the AI chip startup market may not support multiple independent players, regardless of technical merit.
Building competitive AI chips requires world-class expertise spanning chip architecture, compiler optimization, system integration, and software ecosystems—skills that cost hundreds of millions of dollars annually to retain. Manufacturing leading-edge chips demands access to TSMC's most advanced nodes and CoWoS packaging capacity, with minimum order quantities measured in tens of thousands of units. Reaching customers requires field sales teams, technical support organizations, and continuous software updates—infrastructure that only pays for itself at massive scale.
These fixed costs mean that AI chip economics resemble aircraft manufacturing more than software: you must reach significant production volumes to achieve acceptable unit economics, but can't reach those volumes without first displacing an entrenched incumbent with 10X your production capacity.
NVIDIA's 2025 market position—$3.5 trillion market capitalization, $100+ billion annual data center revenue, 80%+ market share—creates gravitational pull that alternative chip architectures struggle to escape. Every data center optimized for NVIDIA's NVLink interconnects, every model optimized for CUDA, every engineer trained on NVIDIA tools reinforces the moat.
For SambaNova, Cerebras, Groq, and other AI chip startups, the playbook that worked in prior semiconductor waves—build better technology, prove it in benchmarks, scale production, challenge the incumbent—meets an opponent with unprecedented advantages. NVIDIA isn't just the performance leader; it's the standard, the ecosystem, and the safe choice for risk-averse IT organizations.
This dynamic explains why all three major AI chip challengers faced similar trajectories by late 2025: Cerebras's post-IPO stock struggles, Groq's reported difficulty raising follow-on funding, and SambaNova's exploration of a sale all reflected the same fundamental problem. Technical innovation alone couldn't overcome the switching costs, ecosystem effects, and capital intensity required to displace an entrenched platform.
What Comes Next: Three Possible Futures
As of November 2025, SambaNova Systems faces three plausible paths forward, each with profound implications for Rodrigo Liang's legacy and the broader AI hardware landscape.
Scenario 1: Acquisition by Intel or Another Strategic Buyer
The most likely outcome involves SambaNova's sale to Intel or another established semiconductor company at a significant discount to its $5 billion 2021 valuation. Such a deal would value the company at $2-3 billion, providing partial returns to early investors but crystallizing substantial losses for SoftBank and other late-stage backers.
For Intel, the acquisition would represent a realistic path to credible AI inference capabilities, combining SambaNova's technology with Intel's manufacturing capacity and enterprise relationships. The integration risks would be substantial—Intel's track record with acquisitions (including the troubled Altera and Mobileye deals) suggests execution challenges. But CEO Lip-Bu Tan's personal familiarity with SambaNova might smooth the process.
Alternative acquirers could include cloud providers (AWS, Google Cloud, Oracle) seeking to reduce NVIDIA dependence, or international chip firms (Samsung, SK Hynix, TSMC) wanting to secure advanced AI architecture IP.
Scenario 2: Independence via Cost Restructuring and Niche Focus
A second path involves SambaNova dramatically cutting costs, extending its cash runway, and focusing exclusively on profitable niches where its architecture provides defensible advantages. This might mean abandoning broad cloud inference ambitions in favor of specific verticals: government agencies requiring air-gapped AI infrastructure, financial institutions demanding ultra-low-latency inference, or scientific research applications where SambaNova's ability to serve trillion-parameter models on single systems provides genuine breakthroughs.
This strategy would require slashing headcount by 40-50%, abandoning unprofitable customer segments, and accepting modest revenue growth in exchange for a path to profitability. It's the playbook that numerous enterprise infrastructure companies, from Cloudera to Hortonworks to MapR, pursued after failing to achieve anticipated scale, with mixed results.
Scenario 3: Wind-Down or Distressed Asset Sale
The darkest scenario involves SambaNova exhausting its remaining capital—estimated at $150-200 million as of late 2025—without securing additional funding or completing a sale. In this outcome, the company would conduct a structured wind-down, selling IP assets piecemeal to the highest bidders and laying off its workforce.
This path, while painful, would allow core technology to find homes at larger organizations capable of integrating it into broader product portfolios. SambaNova's dataflow architecture patents, compiler technology, and system integration know-how would retain substantial value even if the company itself couldn't sustain operations.
The Verdict: Technical Innovation Meets Economic Reality
Rodrigo Liang's SambaNova journey represents a cautionary tale about the limits of technical innovation in markets with entrenched platforms and winner-take-most economics.
The RDU architecture wasn't vaporware or hype. It demonstrated genuine performance advantages on specific workloads, pioneered dataflow computing principles that may influence future chip designs, and attracted deployments from sophisticated customers including national laboratories and cloud providers. Engineers who worked on the SN40L describe it as one of the most technically impressive chips of the AI era.
But technical excellence proved insufficient against NVIDIA's ecosystem moat, the AI chip market's brutal economics, and the 2024-2025 capital markets' loss of patience with unprofitable infrastructure startups. SambaNova raised $1.13 billion—more than nearly any hardware startup in history—yet still couldn't achieve the scale necessary to compete sustainably.
For Liang personally, the outcome represents partial vindication and partial setback. His thesis that AI demanded new computing architectures was correct. His leadership assembling world-class talent and securing customer deployments in notoriously difficult markets (government, enterprise, cloud providers) demonstrated operational capability. But the inability to convert these achievements into a self-sustaining business will define how history judges SambaNova's impact.
The broader lesson extends beyond one company. If SambaNova—with its billion-dollar-plus capital base, technical pedigree, and strong customer relationships—couldn't independently disrupt the AI chip market, it's unclear which startups can. Cerebras, Groq, Tenstorrent, Graphcore, and numerous other NVIDIA challengers face identical headwinds, suggesting that the AI computing landscape may consolidate around a small number of platform providers rather than fragmenting into competitive diversity.
This concentration carries risks. NVIDIA's near-monopoly on AI training, and increasing dominance of inference, gives the company enormous pricing power and influence over AI development priorities. Alternative architectures like SambaNova's dataflow approach might unlock breakthroughs in efficiency, energy consumption, or specialized applications—but these benefits may never materialize if the economics of competing with NVIDIA remain prohibitive.
Epilogue: The Inference Market Awaits Its Disruptor
In November 2025, Rodrigo Liang remained publicly optimistic, giving interviews emphasizing SambaNova's sovereign AI partnerships and the company's leadership in serving massive reasoning models like DeepSeek R1. He described the agentic AI revolution as demanding "10X to 100X" more inference compute, positioning SambaNova perfectly for the next wave of AI deployment.
Behind closed doors, discussions with Intel and other potential acquirers continued. People familiar with Liang's thinking described him as pragmatic about SambaNova's options: "Rodrigo knows that building a successful chip company in 2025 looks different than it did in 2000. If the path to maximum impact runs through Intel or another strategic, he's open to it. But he also believes the RDU architecture deserves a chance to prove itself at real scale."
For Silicon Valley's remaining AI chip startups, SambaNova's trajectory offers sobering lessons. Technical innovation alone won't overcome NVIDIA's ecosystem advantages. Billion-dollar capital raises buy time but not guaranteed success. The inference market opportunity is real, but capturing it requires not just better chips but fundamentally superior economics at production scale—a bar that may prove insurmountable for venture-backed challengers.
The question facing the industry as 2025 draws to a close: Can any independent AI chip company sustainably compete with NVIDIA, or is consolidation inevitable? SambaNova's outcome—whether acquisition, restructuring, or wind-down—will provide a definitive answer, one that will shape AI hardware competition for the remainder of the decade.
In the meantime, the $5 billion AI chip dream that Rodrigo Liang and his co-founders pursued confronts the harsh reality of market economics, platform effects, and the unprecedented challenge of displacing an entrenched infrastructure standard. The story isn't over. But the ending is coming into focus.