Signals intelligence data and analytics leader HawkEye 360 launches satellite Cluster 13
HawkEye 360 successfully launched Cluster 13 on Sunday morning and has confirmed initial communications with the satellites.
This deployment advances its ability to support U.S. Government and international partners with consistent, high-quality RF data across multi-domain mission environments. Alongside recent strategic investments, Cluster 13 reflects a continued focus on strengthening the technology and capabilities its customers rely on.

The global leader in signals intelligence data and analytics, HawkEye 360, successfully launched its latest satellite trio, Cluster 13, and confirmed initial communications with the satellites. Cluster 13, integrated via Exolaunch, launched into a sun-synchronous orbit as part of the Twilight rideshare mission aboard a SpaceX Falcon 9 rocket.
This successful deployment and initial contact advance HawkEye 360's ability to support U.S. Government and international partners with consistent, high-quality radio-frequency insights across multi-domain mission environments. Operating in a sun-synchronous orbit, the satellites provide regular opportunities to collect RF data over key regions, strengthening the delivery of RF insights worldwide.
"Cluster 13 strengthens our ability to provide the critical RF insights our partners need to navigate today's complex mission landscape," said John Serafini, CEO of HawkEye 360. "Alongside our recent acquisition and funding milestone, this launch reflects a continued investment in the technology, people, and capabilities our customers rely on, reinforcing HawkEye 360's role as a leader in signals intelligence."
The payload leverages advanced RF detection, enhanced onboard processing, and upgraded waveform-collection capabilities first introduced across recent launches. Together, these technologies capture a broader range of signals with greater clarity, improve geolocation performance, and increase overall collection capacity. As a result, HawkEye 360 strengthens multi-domain mission support and enables customers to access RF insights more efficiently.
Following commissioning and on-orbit checkout, the satellites will integrate into HawkEye 360's space-derived signals intelligence architecture. This integration advances the company's use of scalable signal processing and AI-enabled analytics to detect, characterize, and geolocate radio-frequency activity worldwide, reinforcing HawkEye 360's defense-tech mission to deliver trusted domain awareness and support critical operations for defense and government partners.
About HawkEye 360
HawkEye 360 is equipping defense, intelligence, and national security leaders with mission-critical signals intelligence to enable faster, better decision-making. By detecting, geolocating, and characterizing radio-frequency emissions worldwide, HawkEye 360 delivers trusted domain awareness and early-warning indicators to the US Government and allied partners. Our space-based collection, proprietary signal processing, and AI-powered analytics transform knowledge of RF spectrum into a strategic advantage. Proven by operational mission success, HawkEye 360 is redefining how signals intelligence strengthens national and global security.
Nvidia's $20 Billion Groq Deal Reshapes AI Infrastructure for the Age of Inference
A record-breaking licensing transaction signals the dawn of the inference era and validates Groq's meteoric rise
Nvidia's $20 Billion Groq Deal: A Strategic Masterstroke in the Inference Era
As inference emerges as the defining battleground in AI's global expansion, Nvidia has made a decisive move: securing a $20 billion licensing agreement with Groq, one of the fastest-growing and best-executing companies in our portfolio.
The numbers speak for themselves. Since launch, CEO Jonathan Ross and his team have been on a remarkable trajectory: over 2.5 million developers onboarded to their cloud, major global partnerships secured, and data centers deployed at a pace unmatched by any modern AI infrastructure company. For context, OpenAI took years to reach 4 million developers—Groq accomplished a comparable feat in months.
While we're thrilled to witness this record-setting outcome for the AI industry, we'll admit to some surprise. Based on Groq's momentum, we had been anticipating an IPO path with a potential $100 billion valuation within the next 24 months.

The Deal Structure
Understanding the mechanics of a $20B licensing transaction
Nvidia's $20 billion licensing transaction with Groq represents a strategic pivot in how the AI giant approaches the rapidly evolving inference market. Rather than a traditional acquisition, this licensing structure allows Nvidia to integrate Groq's revolutionary LPU (Language Processing Unit) technology into its ecosystem while enabling Groq to maintain operational independence.
The deal structure is particularly notable for its focus on intellectual property licensing rather than equity acquisition. This approach provides Nvidia with immediate access to Groq's inference acceleration technology—critical as AI deployment shifts from training-heavy workloads to inference-dominated production environments.
"As inference becomes the next biggest phase in global AI scale and adoption, Groq became a must-have to add an inference layer to the Nvidia ecosystem."
Company & Employee Impact
What this means for Groq's trajectory and team
CEO and founder Jonathan Ross has led Groq on what can only be described as an epic trajectory. The company's execution has been remarkable: acquiring over 2.5 million developers on their cloud platform, closing major global partnerships, and building data centers at a pace that outstrips any modern AI infrastructure company.
For context, OpenAI accumulated 4 million developers over several years—Groq achieved more than half that figure in mere months. This velocity of adoption speaks to both the performance advantages of Groq's LPU architecture and the team's exceptional go-to-market execution.
Team Continuity
The licensing structure allows Groq's team to continue operating independently, preserving the culture and velocity that made them attractive to Nvidia in the first place.
Operational Scale
With $20B in licensing capital, Groq can accelerate data center buildout and talent acquisition without the constraints of traditional venture funding cycles.
Developer Adoption: Groq vs OpenAI

The Investor Perspective
A record outcome with an unexpected path
Many believed Groq was on a clear path to IPO, with projections suggesting a potential $100 billion valuation within 24 months post-listing. The decision to pursue a licensing arrangement with Nvidia rather than the public markets signals either an extraordinarily compelling offer or a strategic calculation about market timing and partnership value.
"We were thrilled to see this record AI industry outcome, but also surprised based on our belief that Groq was heading to IPO and a potential $100 billion valuation in the next 24 months."
Investment Thesis Validated
Groq has been described as "one of the fastest growing, best executing companies" in the AI infrastructure space. This deal validates the thesis that purpose-built inference hardware would become critical infrastructure as AI moves from research labs to production deployment at scale.
Technology & Industry Implications
Reshaping the AI infrastructure landscape
The strategic rationale for Nvidia is clear: as AI workloads shift from training to inference, the company needs to maintain its dominant position across the entire AI compute spectrum. Groq's LPU architecture offers deterministic latency and exceptional throughput for inference workloads—capabilities that complement Nvidia's GPU-centric approach.
LPU vs GPU: Performance Comparison

For Nvidia
Adds a critical inference layer to the ecosystem, positioning Nvidia to capture value across both training and production AI workloads without cannibalizing GPU sales.
For Groq
Gains access to Nvidia's enterprise relationships and global distribution, accelerating LPU adoption while maintaining technological independence.
For the Industry
Signals that inference optimization is now a first-class concern, validating investments in purpose-built inference infrastructure by other startups.
For Developers
The 2.5M+ developers on Groq's platform now have a pathway to deeper Nvidia integration, potentially simplifying hybrid training-inference deployments.
The Groq Journey
From Google TPU spinout to record-breaking $20B deal

Looking Ahead
The inference era begins
Nvidia's willingness to pay $20 billion for licensing rights to inference technology validates what many in the industry have long believed: the real value in AI isn't in training models, but in deploying them at scale. Groq's LPU architecture, with its deterministic performance and exceptional throughput, is now positioned to be the inference backbone of the world's most valuable technology company's AI ecosystem.
For Groq, its team, and its investors, this is more than a transaction—it's validation of a vision that bet big on inference when the world was still focused on training.
Aerospace innovator Natilus expands globally into the fastest-growing aviation markets with SpiceJet in India
Natilus and SpiceJet have partnered, with SpiceJet purchasing 100 of Natilus's innovative blended-wing aircraft, aiming for greater fuel efficiency, lower costs, and reduced emissions in India's growing market. Natilus established an Indian subsidiary (Natilus India) in Mumbai to support operations, while SpiceJet helps with local certification for these futuristic, sustainable planes, potentially transforming domestic air travel.
- The company's new subsidiary in Mumbai - Natilus India - will serve Indian airlines' need for new planes amid the country's rapid economic growth
- SpiceJet places purchase order for 100 of Natilus's HORIZON passenger aircraft to reduce airline's emissions and operational costs
- The move reflects India's appetite for new aircraft that supplement commercial fleets and open the door to new domestic and international routes
Natilus, a U.S. aerospace manufacturer of blended-wing body aircraft, today debuted Natilus India, a subsidiary headquartered in Mumbai and led by Ravi Bhatia, Regional Director, to support in-country operations. Simultaneously, Natilus announced its first commercial partnership with one of India's largest passenger airlines, SpiceJet Ltd., which plans to purchase 100 of Natilus's flagship HORIZON passenger aircraft once the plane is certified in India. SpiceJet sees HORIZON as an opportunity to advance its sustainability efforts and drive forward aviation innovation in India, and it will be the first airline in India to add HORIZON to its fleet.
Natilus India will prioritize the expansion of Natilus's family of BWB aircraft into Indian markets and will coordinate closely with SpiceJet. This development also positions Natilus to begin exploratory sourcing of Indian-manufactured components.
"We see immense opportunity to deliver a superior and more-efficient airplane for Indian airlines, such as SpiceJet," said Ravi Bhatia, Regional Director of Natilus India. "The establishment of the subsidiary in India is the first step in Natilus establishing roots in India and ultimately securing more commercial airline customers who want to better serve the needs of their passengers."
Natilus is developing a family of blended-wing aircraft, including its flagship passenger plane HORIZON and cargo plane KONA. Through improvements in aerodynamics, Natilus's blended-wing aircraft offer 40% greater capacity, 50% lower operating costs, and 30% lower fuel burn. The HORIZON, which can transport up to 240 passengers in its high-capacity configuration, is a prime solution for Indian commercial airlines looking to grow their fleets and keep flights affordable. Deliveries of the HORIZON will begin in the 2030s, at which point it is poised to become the most cost- and fuel-efficient aircraft in existing commercial and cargo fleets.
"Today, India has one of the fastest-growing markets in the world and it has an appetite for new aircraft that can both supplement its commercial carrier fleets, while opening the door to new domestic and international routes," said Aleksey Matyushev, CEO and Co-Founder of Natilus. "Our HORIZON serves as an ideal solution and represents an innovative path forward for the industry, for India and for the world."
HawkEye 360 closes $150 million Series E and strategic acquisition of Innovative Signal Analysis (ISA)

HawkEye 360, the global leader in signals intelligence data and analytics, today announced the completion of its acquisition of Innovative Signal Analysis (ISA), supported by equity and debt financings totaling $150 million.
The acquisition of ISA significantly expands HawkEye 360's signal-processing capabilities, bringing advanced algorithms, mission-ready systems, and deep engineering expertise that enhance the company's ability to detect, characterize, and analyze complex RF activity. ISA's technology and team strengthen HawkEye 360's end-to-end platform by accelerating data processing, improving performance in challenging RF environments, and supporting more scalable delivery of insights to customers.
This Series E preferred equity financing round was co-led by existing investor NightDragon and Center15 Capital, with additional secured and mezzanine debt financing from Silicon Valley Bank, a division of First Citizens Bank, Pinegrove Venture Partners, and Hercules Capital, Inc. The funding supports HawkEye 360's acquisition of ISA and strengthens the company's financial position, reinforcing the company's disciplined approach to growth and long-term financial management.
"This transaction marks an important step forward for HawkEye 360 as we continue to scale our platform and integrate highly complementary technical capabilities," said John Serafini, CEO of HawkEye 360. "The acquisition of ISA cements our position as the leading provider of RF data, signal processing, and analysis. The leadership of NightDragon and Center15 in this Series E round, alongside our lending partners, reflects confidence in our strategy and the value our capabilities bring to customers and partners worldwide."
"NightDragon is proud to continue our support of HawkEye 360's mission and growth strategy by leading this Series E round," said Dave DeWalt, Founder and CEO, NightDragon. "Since our initial investment, the company has made exceptional progress, and this funding and acquisition represent an important step in accelerating growth and advancing a platform we believe is essential to the market and to enduring national and global security." NightDragon has been an investor in HawkEye 360 since 2021.
"This financing supports the integration of ISA while maintaining a balanced and deliberate approach to scaling the business," said Craig Searle, CFO of HawkEye 360. "It strengthens our balance sheet and positions the company to execute on our operational priorities."
"HawkEye 360 is delivering mission-critical signals intelligence for the United States and its allies. We are proud to support HawkEye 360's next phase of growth," said Ian Winer, Founder & CEO at Center15 Capital.
With the integration of ISA underway, HawkEye 360 continues to advance its platform and deliver signals intelligence capabilities that defense, government, and international partners rely on to support critical missions.
Cooley LLP represented HawkEye 360 in this transaction.
Fintech leader Octane raises $100M Series F funding round to fuel record growth
Equity Financing to Drive Continued Growth, Innovation, and Market Expansion.
Octane®, the fintech revolutionizing the buying experience for the powersports industry, announced it has closed its Series F funding round of $100 million in equity capital. The raise includes new equity capital to be used for growth initiatives as well as amounts allocated to secondary share transfers.
The capital builds on Octane’s strong originations growth and enables the Company to further accelerate market penetration and deepen its product offering, positioning the Company even more favorably for long-term success. The Series F raise attracted a mix of returning and new investors; Valar Ventures led the round with participation from Upper90, Huntington Bank, Camping World and Good Sam, Holler-Classic, and others. Prior to the Series F, Octane had raised $242 million in total equity funding since inception, including its Series E, which closed in 2024.
“Building on our strong foundation, this capital allows us to move more quickly on key initiatives that will further differentiate us in existing markets and speed up our entrance into new ones,” said Jason Guss, CEO and Co-Founder of Octane. “We’re grateful to our existing investors for their continued support and belief in our vision, as well as to new investors for their partnership. We look forward to strengthening these relationships as we expand our offerings and unlock the full potential of financial products for merchants and consumers.”
“One of the investing lessons of the past two decades is that the best tech companies can compound for far longer than expected,” said James Fitzgerald, Founding Partner of Valar Ventures. “Octane’s unique offering supports dealers and OEMs with software and financing solutions unavailable elsewhere. We expect Octane to continue to take market share — both in its existing markets and in those it’s only begun to enter — for a very long time. We are excited to continue backing this team and to partner with them for another decade, or longer.”
“It’s been impressive to watch Octane’s execution in becoming a clear leader in the powersports market,” said Billy Libby, Managing Partner at Upper90. “Now the company is scaling its proprietary underwriting engine and end-to-end technology platform as it expands into new markets and helps dealers grow their profits and deliver better financing experiences to consumers. Few public or private companies are growing as rapidly — and profitably — as Octane, and we’re excited to be part of their continued growth.”
Thus far in 2025, Octane has launched a myriad of new products and technology enhancements, including groundbreaking updates for both merchants and consumers. Notably, Octane strengthened its industry-leading financing portal to provide even faster, easier customer acquisition and closing processes for merchants, helping them reach more buyers and increase profitability. At the same time, customers can access simplified payment options, expedited question resolution, and increased flexibility within the Customer Portal.
Since its founding in 2014, Octane has originated over $7 billion in loans through its in-house lender Roadrunner Financial®, Inc., issued more than $4.7 billion in asset-backed securities, and has sold or committed to sell $3.3 billion of secured consumer loans since December 2023. The Company grew originations by more than 30% from Q3 2024 to Q3 2025 and is GAAP net income profitable. Octane works with 60 original equipment manufacturer (OEM) partner brands and serves markets worth a combined $150 billion with its innovative technology solutions and fast, easy financing experience.
Media Contact:
Senior Vice President, Communications and People at Octane
Press@octane.co
Investor Relations:
IR@octane.co
Forbes and Lightmatter CEO talk light-based photonic compute and the future of AI infrastructure
Meet The Founder Betting On Light As The Future Of AI Chip Technology and Infrastructure.
Nicholas Harris, CEO of Lightmatter, sat down with Forbes to discuss building AI chip technology relying on light rather than electrical signals. Harris also discussed why the end of Moore's Law and the AI boom created the perfect moment for photonics, enabling massive acceleration in AI model training and increased data center efficiency.

- Addressing Bottlenecks: As AI chip processing power increases, the connections (interconnects) between chips become the primary limitation, causing GPUs to sit idle while waiting for data. Lightmatter's photonic solutions significantly increase bandwidth and reduce latency and power consumption compared to conventional copper interconnects.
- Photonic Interconnects: The company's core product is the Passage platform, a 3D-stacked silicon photonics engine that facilitates high-speed data transfer between processors. This technology uses light (photons) to move data, offering up to 100 times faster data movement between chips in some cases.
- Products & Integration:
- Passage M1000: An active photonic interposer (a layer that AI chips sit atop) with record-breaking optical bandwidth for large AI model training.
- Passage L200: A 3D co-packaged optics (CPO) chiplet that integrates directly with XPU (accelerator/processor) and switch designs for improved performance and energy efficiency.
- Envise: A complete photonic AI accelerator platform designed to run AI computations with greater power efficiency than traditional electronic systems.
- Scalability & Partnerships: Lightmatter's technology enables large-scale AI clusters with up to millions of nodes. The company partners with major semiconductor and packaging firms like GlobalFoundries, Amkor, and ASE to ensure high-volume manufacturing and industry standardization. They are also part of the UALink Consortium, which aims to standardize AI interconnect solutions.
- Higher Speed and Bandwidth: Light-based interconnects can transfer data at much faster speeds and higher densities than electrical signals, which is crucial for the massive datasets used in AI.
- Energy Efficiency: Photonic interconnects consume significantly less power, helping data centers manage their growing energy consumption and operating costs.
- Performance Scaling: By eliminating interconnect bottlenecks, Lightmatter enables faster training times for advanced AI models, allowing for the development of larger and more capable neural networks.
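The idle-GPU bottleneck described above can be made concrete with a toy model. The sketch below is purely illustrative: the per-step compute time, payload size, and baseline bandwidth are assumed numbers, not Lightmatter or Nvidia specifications; only the "up to 100 times faster data movement" ratio comes from the text, and it treats communication as fully serialized with compute.

```python
# Illustrative back-of-envelope model of the interconnect bottleneck.
# All absolute numbers are hypothetical assumptions for demonstration.

def step_time(compute_s: float, bytes_moved: float, bandwidth_Bps: float) -> float:
    """Time per training step when data transfer cannot overlap compute."""
    return compute_s + bytes_moved / bandwidth_Bps

def utilization(compute_s: float, bytes_moved: float, bandwidth_Bps: float) -> float:
    """Fraction of each step the GPU spends computing rather than waiting."""
    return compute_s / step_time(compute_s, bytes_moved, bandwidth_Bps)

# Hypothetical step: 10 ms of compute, 1 GB exchanged between chips.
compute = 0.010          # seconds of compute per step (assumed)
payload = 1e9            # bytes moved per step (assumed)

copper = 100e9           # assumed 100 GB/s electrical interconnect
photonic = 100 * copper  # the article's "up to 100x" data-movement claim

print(f"copper utilization:   {utilization(compute, payload, copper):.1%}")
print(f"photonic utilization: {utilization(compute, payload, photonic):.1%}")
# → copper utilization:   50.0%
# → photonic utilization: 99.0%
```

Under these assumed numbers, halving transfer time does nothing for compute itself, but shrinking the wait from 10 ms to 0.1 ms moves GPU utilization from 50% to roughly 99%, which is why faster interconnects translate directly into faster training.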
AlphaSense AI makes Inc.'s 2025 Best in Business list in a major year for AI innovation and enterprise adoption
Company recognized for leadership in "Best AI Implementation" and "Best in Innovation" categories, validating the impact of its enterprise AI workflow enhancements
AlphaSense, the AI platform redefining market intelligence for the business and financial world, today announced it has been named to Inc.'s 2025 Best in Business list in the "Best AI Implementation" and "Best in Innovation" categories. This recognition solidifies a record-breaking year of growth for the company, which recently surpassed $500 million in Annual Recurring Revenue (ARR), driven by accelerated adoption of AlphaSense's AI workflow capabilities now used by over 6,500 enterprises.
AlphaSense is pioneering how businesses in every industry integrate generative AI into critical workflows such as go-to-market planning, investment banking, M&A, investor relations, and competitive intelligence. The company's AI innovation is underscored by the rapid adoption of its Generative Search capabilities, which enable users to instantly find insights across 500 million documents with natural language queries. Generative Search understands industry-specific terminology, anticipates relevant questions, and delivers granularly cited, analyst-level insights in seconds.
These AI capabilities have since expanded to include sophisticated AI workflow agents that act like a team of trusted, domain-specific analysts to guide decision making.
The Inc. Best in Business award celebrates companies that have demonstrated the most impactful and tangible business wins of the year. This accolade is the latest in AlphaSense's notable industry recognitions already this year, including being named to the Forbes 2025 Cloud 100 and listed as No. 8 in the CNBC Disruptor 50.
"This recognition by Inc. and the AI advancements we've made this year validates the work we are doing to democratize access to elite market intelligence for our customers, with trustworthy AI at the helm," said Kiva Kolstein, President and Chief Revenue Officer of AlphaSense. "The companies winning in this new era aren't just experimenting with AI, they're using AI to earn trust and grow revenue. Our mission has never been more critical. We enable customers to shift from complexity to clarity, deliver intelligence that drives real impact, and empower users to move from insight to action before the competition."
AlphaSense's AI platform enhancements over the past year include Generative Search, Generative Grid, Deep Research, Financial Data and AI Agent Interviewer, demonstrating the company's evolution from AI search to fully automated, end-to-end AI workflows and addressing the growing need among enterprises for domain-specific AI that is trustworthy and scalable.
AlphaSense is accelerating its vision to deliver trusted intelligence at unprecedented speed. Upcoming enhancements will be focused on unlocking deeper insights across quantitative and qualitative data, automating complex workflows with customized agents, and generating decision-ready deliverables—from pitchbooks to newsletters—on-demand.
About AlphaSense
AlphaSense is the AI platform redefining market intelligence and workflow orchestration, trusted by thousands of leading organizations to drive faster, more confident decisions in business and finance. The platform combines domain-specific AI with a vast content universe of over 500 million premium business documents — including equity research, earnings calls, expert interviews, filings, news, and internal proprietary content. Purpose-built for speed, accuracy, and enterprise-grade security, AlphaSense helps teams extract critical insights, uncover market-moving trends, and automate complex workflows with high-quality outputs. With AI solutions like Generative Search, Generative Grid, and Deep Research, AlphaSense delivers the clarity and depth professionals need to navigate complexity and obtain accurate, real-time information quickly. For more information, visit www.alpha-sense.com.
Media Contact
Remi Duhe for AlphaSense
Email: media@alpha-sense.com
The AI tech IPO window is reopening, with Anthropic and OpenAI preparing to IPO in 2026
Anthropic has hired Wilson Sonsini to prepare for a potential 2026 IPO while simultaneously raising a major private round (Bloomberg, The Information).
OpenAI is taking similar steps: the company has reportedly begun internal preparation for a 2026 listing, including discussions with banks, legal advisors, and early examinations of financial disclosures (Reuters, FT, Bloomberg).
These are concrete moves from two of the most capital-intensive AI companies — not speculation.
Private Capital Has Hit Its Limits
- AI scale-up looks more like energy infrastructure than software.
- IDC and Gartner estimate AI-related data-center capex in the hundreds of billions for 2025.
- Bloomberg Intelligence projects multi-trillion-dollar cumulative AI infra investment through 2030.
OpenAI and Anthropic both face heavy compute and power requirements. Private rounds alone cannot fund frontier model development at this scale. This is the structural reason top AI firms are preparing public filings.
IPO Activity Is Already Back
The US IPO market — frozen in 2022–2023 — has materially reopened:
- IPO volume in Q1–Q3 2025 nearly matched the entire year of 2024 (Renaissance Capital).
- ServiceTitan surged more than 40% on its December debut (WSJ).
- Anthropic and OpenAI both advancing IPO groundwork for 2026 (Bloomberg, Reuters, FT).
This activity shows a functioning pipeline, not a hypothetical one.
Why AI Is Driving the Shift to Public Markets
Major tech companies continue massive infrastructure expansion:
- AI capex for 2025 is measured in the hundreds of billions (IDC, Gartner).
- OpenAI’s annualized revenue has grown sharply, but the company still operates with substantial losses due to compute costs (FT).
- Microsoft has paused or slowed select data-center builds, signaling early supply constraints (Bloomberg).
These pressures create a simple reality: Only public markets can supply capital at the required scale.
Risks the Market Is Closely Watching
Monetization: OpenAI, Anthropic, and others show strong revenue growth but remain capital-intensive and unprofitable (FT).
Infrastructure bottlenecks: GPU availability, power constraints, and paused hyperscaler projects (Bloomberg).
Macro sensitivity: IPO activity reacts immediately to liquidity conditions; any broad market shock could slow issuance.
These risks are real and visible in filings, analyst notes, and infrastructure reports.
Bottom Line
Between Anthropic’s IPO preparation and OpenAI’s internal listing work, the direction of travel is clear:
late-stage AI companies are returning to public markets because they have to.
With IPO activity rising, strong debut performance, and unprecedented infrastructure spending needs, the tech financing environment has already shifted.
The IPO window isn’t “about to open.”
It’s open.
Heather Planishek joins the Lambda superintelligence cloud team as Chief Financial Officer
A big next step toward an IPO and the public markets for Lambda and the AI industry. Today, Heather Planishek joins Lambda as Chief Financial Officer. Most recently, she served as Chief Operating and Financial Officer at Tines, the intelligent workflow platform, and she has been Lambda's Audit Chair since July 2025. Heather brings deep company insight to our leadership team as we accelerate the deployment of AI factories to meet demand from hyperscalers, enterprises, and frontier labs building superintelligence.
Heather is an industry veteran who brings deep financial and operational expertise as Lambda accelerates the deployment of AI factories to meet demand from hyperscalers, enterprises, and frontier labs building superintelligence.
Lambda, the Superintelligence Cloud, announced the appointment of Heather Planishek as Chief Financial Officer. Planishek brings deep experience scaling high-growth technology companies and will lead Lambda’s global finance organization as the company continues to expand its infrastructure footprint and operations.
Planishek most recently served as Chief Operating and Financial Officer at Tines, the intelligent workflow platform. She previously served as Chief Accounting Officer at Palantir Technologies Inc. (NASDAQ: PLTR), where she helped guide the company through a period of significant growth and its transition to the public markets. Earlier in her career, she held key leadership roles at Hewlett Packard Enterprise and Ernst & Young, building extensive expertise in enterprise-scale finance, operations, and governance. Planishek joined Lambda’s Board of Directors in July of this year, and as part of the transition, will step down from her board position.
“It’s awesome to have Heather join as our CFO,” said Stephen Balaban, co-founder and CEO of Lambda. “I’ve gotten to know her through our work together on the board, and it’s great to have her take on an executive leadership role at Lambda.”
In her new position, Planishek will oversee Lambda’s financial strategy, planning, accounting, treasury, investor relations, and business systems. Her appointment strengthens the company’s leadership team as Lambda continues to serve some of the world’s leading AI labs and largest hyperscalers, as well as tens of thousands of AI developers across research, startups, and enterprise.
“Lambda sits at the center of one of the most important shifts in technology, with infrastructure that I believe is rapidly becoming essential for AI at scale,” said Planishek. “I’m excited to join the leadership team and help strengthen the financial and operational foundation that will support Lambda’s continued growth and long-term market leadership.”
About Lambda
Lambda, The Superintelligence Cloud, is a leader in AI cloud infrastructure serving tens of thousands of customers.
Founded in 2012 by machine learning engineers who published at NeurIPS and ICCV, Lambda builds supercomputers for AI training and inference.
Our customers range from AI researchers to enterprises and hyperscalers.
Lambda’s mission is to make compute as ubiquitous as electricity and give everyone the power of superintelligence. One person, one GPU.
Contacts
Lambda Media Contact
pr@lambda.ai
Lambda revolutionizes light-based AI infrastructure with NVIDIA Quantum‑X Photonics for next-gen AI factories
Frontier AI training and inference now operate at unprecedented scale. NVIDIA's new co-packaged optics networking delivers 3.5x higher power efficiency and 10x greater resiliency for large-scale GPU clusters.
Lambda, the Superintelligence Cloud, today announced it is among the first AI infrastructure providers to integrate NVIDIA's silicon photonics–based networking.
The shift to co-packaged optics (CPO) addresses a critical bottleneck in AI infrastructure. As AI models now train on hundreds of thousands of GPUs and beyond, the network connecting them has become as important as the GPUs themselves. Traditional networking approaches are not keeping pace with this scale.
Why CPO Matters for AI Compute Networks
NVIDIA Quantum-X Photonics InfiniBand and NVIDIA Spectrum-X Photonics Ethernet switches use co-packaged optics (CPO) with integrated silicon photonics to provide the most advanced networking solution for massive-scale AI infrastructure. CPO addresses the demands and constraints of GPU clusters across multiple vectors:
- Lower power consumption: Integrating the silicon photonic optical engine directly next to the switch ASIC eliminates the need for transceivers with active components that require additional power. At launch, NVIDIA cited a 3.5x power-efficiency improvement over traditional pluggable networks.
- Increased reliability and uptime: Fewer discrete optical transceiver modules, one of the highest failure rate components in a cluster, mean fewer potential failure points. NVIDIA cites 10x higher resilience and 5x longer AI application runtime without interruption over traditional pluggable networks.
- Lower latency communication: Placing optical conversion next to the switch ASIC minimizes electrical trace lengths. This simplified data and electro-optical conversion path provides lower latency than traditional pluggable networks.
- Faster deployment at scale: Fewer separate components, simplified optics cabling, and fewer service points mean that large-scale clusters can be deployed, provisioned, and serviced more quickly. NVIDIA cites 1.3x faster time to operation versus traditional pluggable networks.
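To illustrate how a network-optics efficiency gain compounds at cluster scale, here is a rough back-of-envelope sketch. Only the 3.5x power-efficiency ratio comes from the announcement; the link count and per-transceiver wattage are hypothetical illustration values, not NVIDIA or Lambda figures.

```python
# Back-of-envelope estimate of optical-interconnect power at cluster scale.
# Only the 3.5x efficiency ratio is from the announcement; all other
# numbers below are hypothetical illustration values.

def optics_power_kw(num_links: int, watts_per_link: float) -> float:
    """Total optical-interconnect power for a cluster, in kilowatts."""
    return num_links * watts_per_link / 1000.0

# Hypothetical cluster: 100,000 optical links at ~15 W per pluggable transceiver.
pluggable_kw = optics_power_kw(100_000, 15.0)

# Co-packaged optics at the cited 3.5x power-efficiency improvement.
cpo_kw = pluggable_kw / 3.5

print(f"Pluggable optics: {pluggable_kw:.0f} kW")   # 1500 kW
print(f"CPO optics:       {cpo_kw:.0f} kW")         # ~429 kW
print(f"Power saved:      {pluggable_kw - cpo_kw:.0f} kW")
```

Under these assumed inputs, the saved megawatt-scale power budget can instead feed compute, which is one way the "more compute per watt" framing cashes out.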
“NVIDIA Quantum‑X Photonics is the foundation for high-performance, resilient AI networks. It delivers superior power efficiency, improved signal integrity, and enables AI applications to run seamlessly in the world’s largest datacenters,” said Ken Patchett, VP of DC Infrastructure at Lambda. “By integrating optical components directly next to the network switches, we believe our customers can deploy AI infrastructure faster while significantly reducing operational costs – essential as we continue to scale to support frontier AI workloads.”
NVIDIA reports that NVIDIA Photonics delivers 3.5x better power efficiency, 5x longer sustained application runtime, and 10x greater resiliency than traditional pluggable transceivers. Co-packaged optics can provide increased compute per watt and enhanced network reliability, enabling faster model training and inference.
Built for the era of real-time AI, NVIDIA Photonics networking features a simplified design, resulting in fewer components to install and maintain. Lambda continues to provide scalable AI infrastructure, helping enterprises, research labs, and startups build multi-site, large-scale GPU AI factories.
“AI factories are a fundamentally new class of infrastructure – defined by their network architecture and purpose-built to generate intelligence at massive scale,” said Gilad Shainer, senior vice president of networking at NVIDIA. “By integrating silicon photonics directly into switches, NVIDIA Quantum-X silicon photonics networking switches enable the kind of scalable network fabric that makes massive-GPU AI factories possible.”
How Lambda Plans to Leverage It
Lambda is preparing its next-generation GPU clusters to integrate CPO networking using NVIDIA Quantum-X Photonics InfiniBand and Spectrum-X Photonics Ethernet switches. These advances in silicon-photonics switching are critical as we design massive-scale training and inference systems. For Lambda’s NVIDIA GB300 NVL72 and NVIDIA Vera Rubin NVL144 clusters, we are adopting CPO-based networks to deliver higher reliability and performance for customers while simplifying large-scale deployment operations and improving power efficiency.

This builds on Lambda's ongoing collaboration with NVIDIA. Lambda recently achieved NVIDIA Exemplar Cloud status, validating its ability to deliver consistent performance for large-scale training workloads on NVIDIA Hopper GPUs. Over the past decade, Lambda has earned six NVIDIA awards, affirming its position as a trusted collaborator in NVIDIA’s ecosystem.
Contacts
Lambda Media Contact
pr@lambdal.com