Updated March 2026

AI Infrastructure
Pre-IPO Companies 2026

Chips, cloud, and custom silicon. The hardware layer powering every AI model is being built by a handful of pre-IPO companies that could define the next decade of compute.

6
Companies Tracked
$42.6B
Combined Valuation
$16.2B
Total Funding Raised
2026–27
IPO Window

The AI boom of the mid-2020s has produced a defining dynamic: the largest language models run on infrastructure that does not yet exist at scale. Every GPU cluster, photonic interconnect, and inference chip deployed today is a down payment on a compute buildout that analysts estimate will consume over $400 billion in capital investment through 2028. The companies building that infrastructure are still private — and many are approaching the public markets.

From Lightmatter's photonic AI chips to CoreWeave's GPU cloud empire, the six companies profiled on this page represent the most important hardware and cloud bets in pre-IPO technology. Understanding their technology, funding trajectories, and investor backing is essential for anyone tracking pre-IPO AI infrastructure stocks in 2026.

This page provides up-to-date valuations, total funding raised, CEO information, technology focus, and IPO outlook for each company. Use the comparison table and FAQ sections to understand how these businesses differentiate against incumbents like Nvidia and AMD.

The $400B AI Infrastructure Race

AI models require specialized compute at every layer — training, fine-tuning, and inference. The market for AI infrastructure hardware and cloud services is projected to grow from roughly $90B in 2024 to over $400B by 2028, creating a generational window for pre-IPO companies to capture lasting market share.
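Those endpoints imply a steep compound growth rate, which is easy to sanity-check. A quick sketch (the $90B and $400B figures come from the projection above; the CAGR formula is standard):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two market-size estimates."""
    return (end_value / start_value) ** (1 / years) - 1

# Analyst estimates cited above: ~$90B in 2024 growing to $400B+ by 2028.
growth = cagr(90, 400, 4)
print(f"Implied CAGR 2024-2028: {growth:.1%}")
```

That works out to roughly 45% annual growth sustained over a four-year window, which is the scale of buildout the companies below are positioned against.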

$400B+
Capital Deployment (2024–28)
Hyperscalers, enterprises, and governments are committing unprecedented capital to AI compute buildout, driven by competitive pressure and national security mandates.
🧮
10–100x
Efficiency Advantage Target
Custom AI silicon from Lightmatter, Cerebras, and Groq targets 10x to 100x improvements in performance-per-watt over general-purpose GPUs for specific AI workloads.
☁️
$23B+
CoreWeave GPU Cloud Valuation
GPU cloud providers are capturing massive value as enterprises seek Nvidia H100 and H200 access without the 12–18 month wait for direct hardware allocation from the chip maker.
💡
Photonics
The Next Compute Frontier
Lightmatter's photonic interconnect technology aims to solve the memory bandwidth bottleneck that limits today's electronic AI chips, potentially unlocking the next order-of-magnitude scaling.
🏎️
Sub-ms
Inference Latency Arms Race
Groq's LPU and Cerebras's Wafer-Scale Engine are purpose-built for inference speed, targeting use cases where millisecond response times are critical: financial trading, autonomous vehicles, real-time agents.
🌎
2026–27
IPO Window Opening
Improved public market conditions and maturing AI revenue streams are aligning to create an IPO window for the best-positioned AI infrastructure companies, with CoreWeave widely expected to lead the pack.

AI Infrastructure Pre-IPO Companies

Six pre-IPO companies building the hardware and cloud layer beneath every major AI model deployment. All data reflects the most recent publicly available funding rounds and valuations as of March 2026.

💡
Chips Pre-IPO
Lightmatter
Lightmatter is pioneering photonic AI computing — using light instead of electrons to move data between chips. Its Passage interconnect platform enables chip-to-chip communication at the speed of light, targeting the memory bandwidth wall that limits large language model scaling. Founded in 2017 and headquartered in Mountain View, Lightmatter has raised funding from Google Ventures, Spark Capital, and other top-tier investors.
Valuation
$4.4B
Total Raised
$850M
Focus
Photonic Chips
Stage
Late-Stage
🧠
Cloud Pre-IPO
Together AI
Together AI operates a high-performance AI cloud platform optimized for open-source model inference and fine-tuning. The company provides developers API access to Llama, Mistral, Qwen, and hundreds of other open-source models at competitive pricing, positioning itself as the go-to inference cloud for enterprises that do not want to be locked into proprietary model providers. Backed by Andreessen Horowitz, Nvidia, and Salesforce Ventures.
Valuation
$3.3B
Total Raised
$534M
Focus
AI Cloud
Stage
Series B+
🖥️
Cloud Pre-IPO
CoreWeave
CoreWeave is the largest independent GPU cloud provider in the world, offering Nvidia H100 and H200 clusters to AI labs, enterprises, and hyperscalers at scale. Originally a cryptocurrency mining operation, CoreWeave pivoted to GPU cloud in 2019 and has since built out massive data center capacity. It counts Microsoft and Nvidia among its key customers and strategic investors, with Microsoft committing multi-billion dollar spend agreements.
Valuation
$23B
Total Raised
$12.7B
Focus
GPU Cloud
Stage
IPO-Ready
🧬
Chips Pre-IPO
Cerebras Systems
Cerebras Systems builds the world's largest chip — the Wafer-Scale Engine (WSE) — which integrates an entire silicon wafer into a single processor with 900,000 AI-optimized cores and 44 GB of on-chip SRAM. This architecture eliminates inter-chip communication bottlenecks and delivers exceptional throughput for LLM training and inference. Cerebras filed for IPO in late 2024, making it one of the most closely watched AI hardware IPO candidates.
Valuation
$4B+
Total Raised
$720M
Focus
AI Chips (WSE)
Stage
IPO Filed
Chips Pre-IPO
Groq
Groq has built the Language Processing Unit (LPU) — a deterministic, low-latency chip architecture that achieves industry-leading inference speeds for large language models. Groq's GroqCloud platform offers public API access to Llama and Mixtral models at speeds exceeding 500 tokens per second, dramatically faster than GPU-based inference. Founded by former Google TPU architect Jonathan Ross, Groq is backed by Tiger Global, Neuberger Berman, and D1 Capital.
Valuation
$2.8B
Total Raised
$300M+
Focus
LPU Inference
Stage
Series D
💾
Chips Pre-IPO
SambaNova Systems
SambaNova Systems designs chips built on a Reconfigurable Dataflow Architecture (RDA), optimized specifically for AI training and inference at enterprise scale. Unlike Nvidia's CUDA-centric ecosystem, SambaNova's SN40L chip offers a software-configurable compute fabric that adapts dynamically to different AI workloads. The company targets large enterprises and government agencies, and counts the US Department of Energy's Argonne National Laboratory among its deployments.
Valuation
$5.1B
Total Raised
$1.1B
Focus
AI Training Chips
Stage
Late-Stage

Lightmatter vs Cerebras vs Groq

Three distinct approaches to custom AI silicon — photonics, wafer-scale integration, and LPU dataflow — each targeting different bottlenecks in the AI compute stack.

Category            | Lightmatter                                   | Cerebras Systems                                   | Groq
Core Technology     | Photonic interconnects (Passage)              | Wafer-Scale Engine (WSE-3)                         | Language Processing Unit (LPU)
Primary Use Case    | Chip-to-chip data movement, training clusters | Large model training & fast inference              | Ultra-low latency inference
Valuation           | $4.4B                                         | $4B+                                               | $2.8B
Total Funding       | $850M                                         | $720M                                              | $300M+
Key Investors       | GV, Spark Capital, HPE                        | Benchmark, Foundation Capital, Altimeter           | Tiger Global, Neuberger Berman, D1
Key Differentiator  | Bandwidth without heat; optical vs. copper    | Largest chip ever; eliminates chip-to-chip latency | Deterministic architecture; fastest token throughput
IPO Outlook         | 2026–2027 likely                              | S-1 filed; IPO delayed pending market conditions   | 2027+ (revenue scaling phase)
Nvidia Relationship | Complementary (interconnects Nvidia GPUs)     | Competitive (direct GPU alternative)               | Competitive (inference layer alternative)

AI Infrastructure IPO Questions Answered

When will Lightmatter IPO?

As of March 2026, Lightmatter has not filed an S-1 or announced a formal IPO date. The company completed a $400M funding round in early 2024, which extended its private runway considerably. CEO Nicholas Harris has spoken publicly about the company's focus on revenue growth before pursuing public markets.

Most analysts tracking the Lightmatter IPO expect a potential public offering no earlier than late 2026 or 2027, contingent on continued enterprise adoption of its Passage photonic interconnect platform and macro market conditions. Follow Lightmatter's profile for real-time updates on any IPO filing activity.

When is the Together AI IPO?

Together AI reached a $3.3B valuation in its most recent funding round and has not announced an IPO date. The company is focused on scaling its AI inference cloud platform and expanding enterprise contracts before considering the public markets.

The AI cloud infrastructure space is maturing rapidly, and Together AI's positioning around open-source model hosting gives it differentiation from AWS, Azure, and Google Cloud. An IPO in 2027–2028 is the most commonly cited analyst expectation. See the full Together AI company profile for funding details and investor list.

What does Together AI do, and how does it make money?

Together AI operates an AI cloud platform that allows developers and enterprises to run, fine-tune, and deploy open-source large language models via API. Unlike OpenAI or Anthropic, Together AI does not build its own models — instead, it provides optimized infrastructure to run models like Meta's Llama, Mistral, and others.

The company makes money through usage-based API pricing (per-token fees for inference), dedicated GPU cluster rentals for large enterprises, and fine-tuning services. Its key value proposition is significantly lower cost than proprietary model APIs combined with greater model flexibility — allowing enterprises to avoid vendor lock-in while still accessing state-of-the-art models.
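The usage-based billing model described above is straightforward to reason about. The sketch below compares per-request costs under per-token pricing; the price points are hypothetical placeholders for illustration, not actual Together AI or proprietary-API rates:

```python
def inference_cost(input_tokens: int, output_tokens: int,
                   price_in_per_m: float, price_out_per_m: float) -> float:
    """Usage-based API cost: per-token fees quoted per million tokens."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Hypothetical price points for illustration only -- check current rate cards.
open_model = inference_cost(5_000, 1_000, price_in_per_m=0.60, price_out_per_m=0.60)
proprietary = inference_cost(5_000, 1_000, price_in_per_m=3.00, price_out_per_m=15.00)
print(f"Open-model API:  ${open_model:.4f} per request")
print(f"Proprietary API: ${proprietary:.4f} per request")
```

At these illustrative rates the open-model request is roughly an order of magnitude cheaper, which is the cost wedge the company's positioning relies on.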

Is Lightmatter a competitor to Nvidia?

Lightmatter and Nvidia are largely complementary rather than competitive in the current market. Lightmatter's Passage platform is a photonic interconnect that improves how existing chips — including Nvidia GPUs — communicate with each other within a data center. Rather than replacing GPU compute, Lightmatter removes the bandwidth bottleneck between chips.

Think of it this way: Nvidia makes the engines; Lightmatter makes a better highway for those engines to work together. This complementary positioning is actually a strategic advantage for Lightmatter's near-term go-to-market — it can sell to hyperscalers already committed to Nvidia infrastructure without asking them to replace anything. In the longer term, Lightmatter's compute-in-light technology could evolve into a more direct competitive position as photonic computing matures.
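The highway analogy maps onto a simple roofline-style model: a distributed training step takes as long as its slower phase, either local compute or chip-to-chip data movement. The sketch below uses illustrative placeholder numbers, not Lightmatter or Nvidia specs, to show how raising interconnect bandwidth can shift a workload from communication-bound to compute-bound:

```python
def step_time(compute_flops: float, peak_flops: float,
              bytes_exchanged: float, interconnect_gbps: float):
    """Roofline-style estimate: a step takes as long as its slower phase."""
    t_compute = compute_flops / peak_flops
    t_comm = bytes_exchanged / (interconnect_gbps * 1e9 / 8)  # Gb/s -> bytes/s
    regime = "comm-bound" if t_comm > t_compute else "compute-bound"
    return max(t_compute, t_comm), regime

# Illustrative numbers only: 1 PFLOP of work on a 1 PFLOP/s chip,
# with 100 GB exchanged per step over a 400 Gb/s electrical link...
t, regime = step_time(1e15, 1e15, 100e9, 400)
print(f"electrical link: {t:.2f}s per step ({regime})")
# ...vs a hypothetical 10x-bandwidth optical interconnect.
t, regime = step_time(1e15, 1e15, 100e9, 4000)
print(f"optical link:    {t:.2f}s per step ({regime})")
```

Under these made-up parameters the faster interconnect halves step time by moving the bottleneck back onto the compute itself, which is the effect a "better highway" is meant to deliver.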

How much is CoreWeave worth, and when will it go public?

CoreWeave's most recent private market valuation stands at $23 billion, making it the highest-valued company in this AI infrastructure pre-IPO cohort and one of the most valuable private technology companies in the United States. The company has raised $12.7 billion in total funding, including a $7.5B debt facility and equity rounds led by Magnetar Capital with participation from Nvidia.

CoreWeave is widely considered the most likely near-term IPO in the AI infrastructure sector. The company has filed confidential S-1 paperwork with the SEC and has reportedly held investment bank roadshow discussions. A public offering in 2025–2026 is the base case for most IPO analysts, pending continued revenue growth and market window conditions. View the full CoreWeave profile for investor details and revenue estimates.

How are AI hardware startup valuations determined?

AI hardware startup valuations are driven by a combination of factors distinct from traditional software companies. Revenue multiples play a role, but investors place significant weight on: (1) technology differentiation and defensibility of the chip architecture, (2) design wins and signed customer contracts, (3) fabrication access and supply chain security, and (4) the size of the total addressable market the technology can penetrate.

Companies like Cerebras and Lightmatter command premium valuations despite relatively early revenue because their chip designs represent multi-year R&D moats that competitors cannot quickly replicate. CoreWeave, by contrast, is valued primarily on its contracted revenue backlog and data center asset base — a more capital-intensive but more predictable business model. For a deeper analysis of valuation methodology, see our sector deep dive reports.
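The contrast between the two valuation styles can be made concrete with an implied revenue multiple. The revenue figures below are hypothetical, since private-company revenue is undisclosed; only the valuations come from this page:

```python
def implied_revenue_multiple(valuation_b: float, revenue_b: float) -> float:
    """Valuation-to-revenue multiple implied by a private round (both in $B)."""
    return valuation_b / revenue_b

# Hypothetical revenue figures for illustration -- actual revenue is undisclosed.
print(implied_revenue_multiple(23.0, 2.0))   # backlog-driven cloud model
print(implied_revenue_multiple(4.4, 0.1))    # early-revenue chip designer
```

Even with generous assumptions, the chip designer trades at a far higher multiple, reflecting the R&D-moat premium rather than current sales.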

What are the best AI infrastructure stocks pre-IPO?

The six companies profiled on this page represent the strongest pre-IPO AI infrastructure candidates based on valuation, funding runway, technology differentiation, and IPO timeline signals. For investors with access to pre-IPO secondary markets, CoreWeave, Lightmatter, and Cerebras are the most discussed names.

However, direct pre-IPO investment requires accredited investor status and access to secondary market platforms. Most retail investors gain exposure through public market proxies such as Nvidia (NVDA), TSMC (TSM), and infrastructure-focused ETFs until these companies list. When they do list, we will cover the IPO on the TechStackIPO Pipeline page.

Why is Groq faster than Nvidia GPUs for LLM inference?

Nvidia's GPUs are designed for massively parallel floating-point computation — excellent for AI training, but not architecturally optimized for the sequential nature of text generation in language models. Each token must be generated one at a time, which means GPU parallelism is only partially utilized during LLM inference.

Groq's Language Processing Unit (LPU) is a deterministic, single-threaded sequential processor with extremely high memory bandwidth. Because token generation is inherently sequential, the LPU's architecture aligns perfectly with the workload — enabling throughput speeds of 300–500+ tokens per second on models like Llama, compared to 40–100 tokens per second on equivalent GPU setups. The tradeoff is that LPUs are less general-purpose than GPUs, making Groq a specialist inference play rather than a full training platform competitor. See Groq vs Cerebras for a deeper comparison.
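Because token generation is sequential, wall-clock latency scales roughly linearly with output length at a given sustained throughput. A minimal sketch using the throughput ranges quoted above:

```python
def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Token generation is sequential, so wall-clock time scales linearly
    with output length at a given sustained throughput."""
    return num_tokens / tokens_per_second

# Throughput ranges quoted above: GPU ~40-100 tok/s, LPU ~300-500+ tok/s.
for label, tps in [("GPU (70 tok/s)", 70), ("LPU (500 tok/s)", 500)]:
    print(f"{label}: {generation_time(1000, tps):.2f}s for a 1,000-token reply")
```

At these rates a 1,000-token reply drops from double-digit seconds on a mid-range GPU setup to a couple of seconds on an LPU, the gap that matters for the real-time use cases listed above.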

Track Every AI Infrastructure IPO

Get alerts when Lightmatter, CoreWeave, or Cerebras files an S-1. We cover every major pre-IPO company in real time.