The AI boom of the mid-2020s has produced a defining dynamic: the largest language models run on infrastructure that does not yet exist at scale. Every GPU cluster, photonic interconnect, and inference chip deployed today is a down payment on a compute buildout that analysts estimate will consume over $400 billion in capital investment through 2028. The companies building that infrastructure are still private — and many are approaching the public markets.
From Lightmatter's photonic AI chips to CoreWeave's GPU cloud empire, the six companies profiled on this page represent the most important hardware and cloud bets in pre-IPO technology. Understanding their technology, funding trajectories, and investor backing is essential for anyone tracking pre-IPO AI infrastructure stocks in 2026.
This page provides up-to-date valuations, total funding raised, CEO information, technology focus, and IPO outlook for each company. Use the comparison table and FAQ sections to understand how these businesses differentiate against incumbents like Nvidia and AMD.
The $400B AI Infrastructure Race
AI models require specialized compute at every layer — training, fine-tuning, and inference. The market for AI infrastructure hardware and cloud services is projected to grow from roughly $90B in 2024 to over $400B by 2028, creating a generational window for pre-IPO companies to capture lasting market share.
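The growth rate implied by that projection can be checked with simple compound-growth arithmetic. A minimal sketch, using only the $90B (2024) and $400B (2028) figures cited above:

```python
# Implied compound annual growth rate (CAGR) behind the cited
# $90B -> $400B AI infrastructure market projection.
start, end = 90e9, 400e9   # 2024 and 2028 market-size estimates
years = 2028 - 2024

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 45% per year
```

A market compounding at roughly 45% a year is the backdrop against which each company below is valued.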
AI Infrastructure Pre-IPO Companies
Six pre-IPO companies building the hardware and cloud layer beneath every major AI model deployment. All data reflects the most recent publicly available funding rounds and valuations as of March 2026.
Lightmatter vs Cerebras vs Groq
Three distinct approaches to custom AI silicon — photonics, wafer-scale integration, and LPU dataflow — each targeting different bottlenecks in the AI compute stack.
| Category | Lightmatter | Cerebras Systems | Groq |
|---|---|---|---|
| Core Technology | Photonic interconnects (Passage) | Wafer-Scale Engine (WSE-3) | Language Processing Unit (LPU) |
| Primary Use Case | Chip-to-chip data movement, training clusters | Large model training & fast inference | Ultra-low latency inference |
| Valuation | $4.4B | $4B+ | $2.8B |
| Total Funding | $850M | $720M | $300M+ |
| Key Investors | GV, Spark Capital, HPE | Benchmark, Foundation Capital, Altimeter | Tiger Global, Neuberger Berman, D1 |
| Key Differentiator | Bandwidth without heat; optical vs. copper | Largest chip ever; eliminates chip-to-chip latency | Deterministic architecture; fastest token throughput |
| IPO Outlook | 2026–2027 likely | S-1 filed; IPO delayed pending market conditions | 2027+ (revenue scaling phase) |
| Nvidia Relationship | Complementary (interconnects Nvidia GPUs) | Competitive (direct GPU alternative) | Competitive (inference layer alternative) |
Side-by-Side Company Analysis
AI Infrastructure IPO Questions Answered
As of March 2026, Lightmatter has not filed an S-1 or announced a formal IPO date. The company completed a $400M funding round in early 2024, which extended its private runway considerably. CEO Nicholas Harris has spoken publicly about the company's focus on revenue growth before pursuing public markets.
Most analysts tracking the Lightmatter IPO expect a potential public offering no earlier than late 2026 or 2027, contingent on continued enterprise adoption of its Passage photonic interconnect platform and macro market conditions. Follow Lightmatter's profile for real-time updates on any IPO filing activity.
Together AI reached a $3.3B valuation in its most recent funding round and has not announced an IPO date. The company is focused on scaling its AI inference cloud platform and expanding enterprise contracts before considering the public markets.
The AI cloud infrastructure space is maturing rapidly, and Together AI's positioning around open-source model hosting gives it differentiation from AWS, Azure, and Google Cloud. An IPO in 2027–2028 is the most commonly cited analyst expectation. See the full Together AI company profile for funding details and investor list.
Together AI operates an AI cloud platform that allows developers and enterprises to run, fine-tune, and deploy open-source large language models via API. Unlike OpenAI or Anthropic, Together AI does not build its own models — instead, it provides optimized infrastructure to run models like Meta's Llama, Mistral, and others.
The company makes money through usage-based API pricing (per-token fees for inference), dedicated GPU cluster rentals for large enterprises, and fine-tuning services. Its key value proposition is significantly lower cost than proprietary model APIs combined with greater model flexibility — allowing enterprises to avoid vendor lock-in while still accessing state-of-the-art models.
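The per-token billing model described above is straightforward to sketch. The rates below are purely hypothetical placeholders (actual Together AI pricing varies by model and is not quoted on this page); the point is the shape of the calculation:

```python
# Sketch of usage-based per-token API billing. The per-million-token
# rates here are hypothetical, not Together AI's actual price list.
def inference_cost(input_tokens, output_tokens,
                   price_in_per_m=0.20, price_out_per_m=0.80):
    """Dollar cost of one request, given per-million-token rates."""
    return (input_tokens * price_in_per_m +
            output_tokens * price_out_per_m) / 1_000_000

# e.g. a 2,000-token prompt producing a 500-token answer:
cost = inference_cost(2_000, 500)
print(f"${cost:.6f} per request")
```

At this scale, individual requests cost fractions of a cent; revenue comes from aggregate volume, which is why enterprise GPU cluster rentals sit alongside the API business.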
Lightmatter and Nvidia are largely complementary rather than competitive in the current market. Lightmatter's Passage platform is a photonic interconnect that improves how existing chips — including Nvidia GPUs — communicate with each other within a data center. Rather than replacing GPU compute, Lightmatter removes the bandwidth bottleneck between chips.
Think of it this way: Nvidia makes the engines; Lightmatter makes a better highway for those engines to work together. This complementary positioning is actually a strategic advantage for Lightmatter's near-term go-to-market — it can sell to hyperscalers already committed to Nvidia infrastructure without asking them to replace anything. In the longer term, Lightmatter's compute-in-light technology could evolve into a more direct competitive position as photonic computing matures.
CoreWeave's most recent private-market valuation stands at $23 billion, making it the highest-valued company in this AI infrastructure pre-IPO cohort and one of the most valuable private technology companies in the United States. The company has raised $12.7 billion in total funding, including a $7.5B debt facility and equity rounds led by Magnetar Capital, with participation from Nvidia.
CoreWeave is widely considered the most likely near-term IPO in the AI infrastructure sector. The company has filed confidential S-1 paperwork with the SEC and has reportedly held investment bank roadshow discussions. A public offering in 2025–2026 is the base case for most IPO analysts, pending continued revenue growth and market window conditions. View the full CoreWeave profile for investor details and revenue estimates.
AI hardware startup valuations are driven by a combination of factors distinct from traditional software companies. Revenue multiples play a role, but investors place significant weight on: (1) technology differentiation and defensibility of the chip architecture, (2) design wins and signed customer contracts, (3) fabrication access and supply chain security, and (4) the size of the total addressable market the technology can penetrate.
Companies like Cerebras and Lightmatter command premium valuations despite relatively early revenue because their chip designs represent multi-year R&D moats that competitors cannot quickly replicate. CoreWeave, by contrast, is valued primarily on its contracted revenue backlog and data center asset base — a more capital-intensive but more predictable business model. For a deeper analysis of valuation methodology, see our sector deep dive reports.
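The contrast between the two valuation lenses can be illustrated with a revenue multiple. The revenue figures below are entirely hypothetical (none are disclosed on this page); only the valuations come from the text above:

```python
# Toy illustration of the two valuation lenses described above.
# Revenue inputs are hypothetical; valuations are the cited figures.
def revenue_multiple(valuation, annual_revenue):
    """Valuation expressed as a multiple of annual revenue."""
    return valuation / annual_revenue

# A chip startup priced on its R&D moat despite early revenue...
print(f"{revenue_multiple(4.0e9, 0.1e9):.0f}x")   # 40x
# ...versus a cloud operator priced closer to contracted revenue:
print(f"{revenue_multiple(23.0e9, 2.0e9):.1f}x")  # 11.5x
```

The higher the multiple, the more of the valuation rests on technology defensibility rather than current financial performance.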
The six companies profiled on this page represent the strongest pre-IPO AI infrastructure candidates based on valuation, funding runway, technology differentiation, and IPO timeline signals. For investors with access to pre-IPO secondary markets, CoreWeave, Lightmatter, and Cerebras are the most discussed names.
However, direct pre-IPO investment requires accredited investor status and access to secondary market platforms. Most retail investors gain exposure through public market proxies such as Nvidia (NVDA), TSMC (TSM), and infrastructure-focused ETFs until these companies list. When they do list, we will cover the IPO on the TechStackIPO Pipeline page.
Nvidia's GPUs are designed for massively parallel floating-point computation — excellent for AI training, but not architecturally optimized for the sequential nature of text generation in language models. Each token must be generated one at a time, which means GPU parallelism is only partially utilized during LLM inference.
Groq's Language Processing Unit (LPU) is a deterministic, single-threaded sequential processor with extremely high memory bandwidth. Because token generation is inherently sequential, the LPU's architecture aligns perfectly with the workload — enabling throughput speeds of 300–500+ tokens per second on models like Llama, compared to 40–100 tokens per second on equivalent GPU setups. The tradeoff is that LPUs are less general-purpose than GPUs, making Groq a specialist inference play rather than a full training platform competitor. See Groq vs Cerebras for a deeper comparison.
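The practical impact of those throughput figures is easy to work out. A back-of-envelope sketch using the token rates cited above (illustrative arithmetic, not benchmarks we ran):

```python
# Time to stream a 1,000-token response at the throughput figures
# cited above. Rates are illustrative midpoints, not measured numbers.
def generation_time(tokens, tokens_per_second):
    """Seconds to generate `tokens` at a given sequential throughput."""
    return tokens / tokens_per_second

for label, tps in [("LPU (Groq, ~500 tok/s)", 500),
                   ("GPU baseline (~70 tok/s)", 70)]:
    print(f"{label}: {generation_time(1_000, tps):.1f} s for 1,000 tokens")
```

A two-second response versus a fourteen-second one is the difference between a conversational product and a noticeably laggy one, which is why inference latency is Groq's entire pitch.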
Track Every AI Infrastructure IPO
Get alerts when Lightmatter, CoreWeave, or Cerebras files an S-1. We cover every major pre-IPO company in real time.