Former Google engineer Rainer Pope has secured $500 million in Series B funding to develop a specialized AI chip designed to challenge Nvidia’s dominance in the large language model (LLM) market. The funding round, led by Jane Street and Situational Awareness, will accelerate the development of a "blank slate" hardware architecture that prioritizes computational density and low-latency throughput. Pope, who left Google in 2022, intends to address the "insatiable" demand for silicon as frontier AI labs reach the physical limits of current hardware.
Key Points
- The $500 million Series B round was led by Jane Street and Leopold Aschenbrenner’s Situational Awareness fund.
- The startup’s hardware utilizes a unique hybrid memory architecture, combining High Bandwidth Memory (HBM) with SRAM to optimize for both capacity and speed.
- By breaking backward compatibility with legacy software, the company aims to achieve the highest throughput per square millimeter of silicon in the industry.
- The final chip design is expected to be completed by the end of 2024, with manufacturing and shipping targeted for 2027.
A "Blank Slate" Approach to AI Hardware
The core philosophy behind Pope’s venture is the abandonment of legacy constraints. Most established players, including Nvidia and Google (with its TPU line), maintain strict backward compatibility to ensure that software written years ago can run on new hardware. However, Pope argues that this requirement forces compromises in chip design that hinder performance for modern LLM workloads.
To "nail the LLM workload," the startup is building from a blank slate, focusing on very large matrices and low-precision support. This allows the hardware to split large systolic arrays into smaller, more efficient pieces. According to Pope, this specific focus is necessary to solve the looming silicon shortage facing major AI laboratories.
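The idea of splitting one large matrix multiply across smaller compute tiles can be illustrated in software. The sketch below is purely illustrative and not the startup's design; the function name, tile size, and matrix shapes are our own choices.

```python
import numpy as np

def tiled_matmul(a, b, tile=4):
    """Compute a @ b by accumulating over square tiles.

    A software analogue of decomposing one large systolic-array
    workload into smaller pieces; the tile size is arbitrary and
    chosen only for demonstration.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=np.result_type(a, b))
    for i in range(0, m, tile):          # rows of the output tile
        for j in range(0, n, tile):      # columns of the output tile
            for p in range(0, k, tile):  # accumulate over the inner dim
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out

a = np.arange(64, dtype=np.float64).reshape(8, 8)
b = np.ones((8, 8))
assert np.allclose(tiled_matmul(a, b), a @ b)
```

Each tile-sized sub-multiply is independent, which is why smaller arrays can be kept busy on matrices of varying shape where one monolithic array would sit partly idle.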
"If you want to absolutely nail the LLM workload, you have to be willing to break compatibility with previous chips... A lot of what that means is there are constraints: my chip has to support all of the previous number formats I supported. We felt that it would be necessary—if you really want to just absolutely nail this workload—something of a blank slate design is required."
The Hybrid Memory Breakthrough
A significant portion of the new capital will be directed toward perfecting a hybrid memory system that bridges the gap between existing industry solutions. Currently, the market is split between HBM-based chips used by Nvidia and Amazon—which offer high capacity but higher latency—and SRAM-only chips from startups like Cerebras and Groq, which offer extreme speed but limited memory capacity.
Pope’s architecture integrates both HBM and SRAM, allowing the chip to handle long-context models without the memory bottlenecks that plague SRAM-only designs. This hybrid approach aims to match the latency of Groq while maintaining the massive throughput required for enterprise-scale AI deployment.
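A back-of-envelope model makes the tradeoff concrete. In the toy sketch below, every number (memory sizes, bandwidths, the split policy) is an illustrative assumption of ours, not a figure from the startup: a memory-bound decode step is served partly from a small, very fast SRAM pool and partly from larger, slower HBM.

```python
def decode_step_time_ms(total_bytes, hbm_gbs, sram_gbs, sram_capacity_bytes):
    """Toy memory-bound model of one LLM decode step.

    Hypothetical policy: bytes up to the SRAM capacity are served
    from SRAM; the remainder streams from HBM. All parameters are
    illustrative assumptions, not any vendor's specifications.
    """
    from_sram = min(total_bytes, sram_capacity_bytes)
    from_hbm = total_bytes - from_sram
    seconds = from_sram / (sram_gbs * 1e9) + from_hbm / (hbm_gbs * 1e9)
    return seconds * 1e3

# Illustrative numbers: 20 GB of weights + KV cache per step,
# 3,000 GB/s HBM, 50,000 GB/s aggregate SRAM, 1 GB of SRAM.
hbm_only = decode_step_time_ms(20e9, 3000, 50000, 0)
hybrid = decode_step_time_ms(20e9, 3000, 50000, 1e9)
assert hybrid < hbm_only
```

Under these assumed numbers the hybrid split shaves time off every step by serving the hottest bytes from SRAM, while HBM supplies the capacity that SRAM-only designs lack for long-context models.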
"It is possible to do both very high throughput as you get from HBM, but also very low latency as you get from SRAM and do that in the same product. What it gives you is actually a product that is better than any other product in the market at throughput... while also matching some of the best, like the Cerebras and the Groq at latency."
Scaling for 2027 Deployment
While the design phase is nearing completion, the transition to mass manufacturing remains a capital-intensive challenge. The company plans to leverage TSMC for logic wafers and the "Big Three" memory providers—SK Hynix, Samsung, and Micron—for HBM components. This global supply chain strategy is essential for meeting the multi-gigawatt deals currently emerging in the data center space.
Following the finalization of the chip design this year, the company will focus on setting up supply chains capable of delivering large-scale rack build-outs. As frontier labs continue to express concerns regarding the availability of silicon, the successful rollout of this hardware in 2027 could provide a critical alternative to the current Nvidia-centric infrastructure. The next three years will be defined by the company's ability to translate this $500 million investment into a physical product capable of outperforming the industry's most established incumbents.