Groq Founder and CEO Jonathan Ross Says Speed to Deployment Sets Its AI Inference Infrastructure Apart
Groq CEO Jonathan Ross explains how the company differentiates itself from the competition in an interview with Bloomberg’s Caroline Hyde, offering insight into how Groq's rapid deployment trajectory enabled its AI-inference platform to close an exclusive partnership with Bell AI Fabric, Canada’s largest sovereign AI infrastructure project. Speed matters in how you build, deploy, and market when you aim to be one of the fastest-growing AI companies.
Conduit Raises $36 Million Series A to Scale Use of Stablecoins for Cross-Border Payments
ALSO READ:
Dragonfly co-leads $36 million Series A in cross-border payment startup Conduit - The Block
Exclusive: Stablecoin company Conduit raises $36 million from Dragonfly Capital and Altos Ventures - Fortune
Conduit transaction volumes surged 16x in 2024, surpassing $10 billion in annualized payment volume. The company will use the new funding to expand its geographic reach and broaden the range of fiat and digital currencies supported through its real-time global payment rails.

Conduit, a leading cross-border payments platform powered by stablecoins, announced today it closed a $36 million Series A funding round. The round was co-led by Dragonfly and Altos Ventures, with participation from Sound Ventures, Commerce Ventures, DCG, Circle Ventures (the venture arm of USDC issuer Circle) and existing investors Helios Digital Ventures and Portage Ventures.
Conduit’s cross-border payment network seamlessly integrates stablecoins, USD and local currencies, providing businesses with a faster, cheaper, and more reliable alternative to the legacy SWIFT system. Already connected into multiple local banks across North America, Latin America, Europe, Africa, and Asia, Conduit will use the capital to fuel expansion into additional markets and support an even broader range of traditional and digital currencies through its real-time payment rails.
This round of funding comes on the back of Conduit’s exceptional growth, with transaction volumes growing 16x through its platform between 2023 and 2024. To date, Conduit has saved clients over 60,000 hours in settlement times and generated fee savings worth over $55 million. The platform bridges crypto-native infrastructure with traditional finance, offering nearly instant, programmable global transactions with integrated AML, sanctions screening, and transaction monitoring.
Clients choose Conduit for:
- Speed and Efficiency: Unlike payment platforms that rely on slow and disjointed networks of correspondent banks, Conduit has direct partnerships with two dozen banks across the world, enabling transactions to settle in seconds rather than days.
- Broad Geographic Coverage: Conduit natively supports a diverse range of currencies and payment methods, including highly inflationary local currencies in Latin America, Africa, and Asia.
- Deep Liquidity: Conduit’s robust network of institutional-grade FX providers ensures large transactions can be processed seamlessly without liquidity constraints.
"This fresh capital injection will enable us to accelerate our mission to build the next generation global payments infrastructure, to promote fairer economic opportunities across the world," said Kirill Gertman, Conduit CEO. "Traditional cross-border payment systems do not meet the demands of modern businesses. Conduit’s platform seamlessly bridges the gap between traditional banking and stablecoin technology, offering unparalleled speed, affordability, transparency and reliability."
Conduit’s platform enables nearly instant global transfers across multiple payment rails, including USD-denominated payment networks (SWIFT, ACH, FedWire) and local payment systems throughout Europe, the UK, and countries such as China, Hong Kong, Mexico, Brazil, Colombia, Nigeria, and Kenya, among others. Businesses in these regions often face restricted access to USD, lack of SWIFT connectivity, limited interoperability between stablecoins and fiat currencies, slow settlement times, high fees, and complex regulatory requirements. While stablecoins can have a significant impact on how businesses can manage their treasuries, most market participants still expect invoices to be settled in fiat currencies, creating a need for seamless interoperability between fiat and digital currencies. Conduit enables clients in those jurisdictions to transition between stablecoins and local currencies in real-time to more efficiently settle commercial invoices.
As part of this investment, Dragonfly Capital’s Rob Hadick will join the Conduit board. "We're thrilled to lead this investment round and support Kirill and his team as they reimagine how money moves across borders. With billions of annual transaction volume already flowing through Conduit’s platform, it has proven there’s a better way to move money globally and that stablecoins are the future of cross-border payments," Hadick stated. "What impressed us most was not just their innovative technology, but their remarkable traction and clear product-market fit. By addressing the real pain points businesses face with international transactions, particularly in emerging markets, Conduit has positioned itself as a critical infrastructure provider for the global economy.”
Founded in 2021, Conduit currently has 57 employees and serves more than 100 clients, with 105% year-over-year client growth. The company plans to expand its product offering into Asia and strengthen its footprint in Mexico and other geographies.
About Conduit
Conduit is a next-generation payment network for businesses that move money across borders. We provide fast, reliable global payments by combining instant local payment rails with the efficiency of stablecoins. With a single API, Conduit connects banks, local payment rails, and blockchains to create a resilient network spanning key markets worldwide — including deep connectivity across Latin America, Africa and Asia. To learn more visit https://conduitpay.com/.
Contacts
Julie Bishop
julie@walkercomms.com
Airspace Link Launches the Operations Center in AirHub® Portal for Drone Operations
Airspace Link, a leading FAA-approved UAS (drone) Service Supplier of B4UFLY and LAANC, today announced the official launch of the Operations Center in its flagship product, AirHub® Portal. Designed to give organizations real-time operational oversight, the Operations Center enables strategic insight, situational awareness, and live tracking of drone and crewed aircraft activity — all in one powerful system.
"The Operations Center transforms AirHub® Portal into a true command center for organizations managing drone operations at scale," said Tyler Dicks, Head of Product at Airspace Link. "From public safety teams and federal agencies to state and local governments and commercial enterprises, we're helping a wide range of users gain the operational clarity they need to deliver safer, smarter, and more coordinated drone operations."
With the new Operate tab activated in AirHub® Portal, users can visualize:
- Active and planned drone operations for the day
- Live crewed aircraft traffic and uncrewed drone activity, with supporting sensor partner integration
- Current weather conditions across mission areas
- B4UFLY airspace briefings
- [Coming Soon] Real-time telemetry from user organizations' own connected drone flights
Airspace Link's new Operations Center in AirHub® Portal delivers a single-pane-of-glass solution for organizations that require coordinated, real-time airspace oversight. Built for flight operations managers, public safety agencies, airfield managers, and state and local authorities, the Operations Center offers a unified, interactive map display that provides a complete common operational picture of all active and planned missions, ensuring safer, more informed, and more efficient decision-making.
Also designed to meet the needs of IT leaders and security professionals, the Operations Center includes robust post-mission analytics, audit trails, and automatic reporting to support regulatory compliance, internal governance, and continuous operational improvement. Critically, Airspace Link safeguards its technology with enterprise-grade security and privacy protocols, backed by SOC 2 and ISO 27001 certifications — ensuring sensitive operational data is protected by industry-leading standards and rigorous best practices.
At the core of the Operations Center is its open platform architecture, which integrates with a range of airspace awareness data sources, including crewed aircraft ADS-B detection systems from industry leader uAvionix.
"We're proud to partner with Airspace Link in delivering high integrity live aircraft traffic data for the Operations Center," said Cyriel Kronenburg, Vice President for UAS and Aviation Networks from uAvionix, Airspace Link's trusted sensor partner. "This integration ensures organizations have access to a high quality and accurate view of their surrounding airspace on a single pane of glass — a critical component for safe and effective drone operations."
Now available for AirHub® Portal organization accounts, the Operations Center joins Airspace Link's full suite of capabilities – from preflight planning and LAANC authorization to internal operation approvals, crew and asset management, and flight logging. Together, these tools form a comprehensive Drone Operations Management System (DOMS) purpose-built for the needs of modern, connected drone programs.
"Whether you're overseeing a city-wide drone program or scaling enterprise operations, the Operations Center delivers the situational awareness and accountability today's teams demand," Dicks added. "It's about empowering diverse stakeholders with the tools to operate smarter, safer, and with total confidence."
As both an FAA-approved UAS Service Supplier of LAANC and B4UFLY, and a provider of advanced drone operations software, Airspace Link offers one of the only fully integrated Drone Operations Management Systems in the market, eliminating the need to manage multiple systems or vendors.
See It Live at XPONENTIAL 2025
Airspace Link will be exhibiting at XPONENTIAL 2025 in Houston. Visit us at Booth #4320 for a live demonstration of the Operations Center and to explore how AirHub® Portal can elevate your organization's drone operations.
Learn more and book your personalized demo here: https://airspacelink.com/xponential2025
About Airspace Link
Founded in Detroit in 2018, Airspace Link is a leading FAA-approved UAS Service Supplier of LAANC and B4UFLY, creating the digital infrastructure for the safe integration of drones into the national airspace and local communities. As SOC 2 compliant and ISO 27001-certified, Airspace Link's drone operations management system, AirHub® Portal, empowers government entities, commercial fleets, certified drone pilots, and the broader drone industry with the tools needed to enable safe, compliant, and efficient drone operations. For more information about Airspace Link and AirHub® Portal, visit www.airspacelink.com.
Media Contact
Rich Fahle
Rich.fahle@airspacelink.com
Sam Stewart Gacaferi
Sam.stewart@airspacelink.com
Groq and Bell Canada partner to build six AI inference data centers
Groq Becomes Exclusive Inference Provider for Bell Canada's Sovereign AI Network
ALSO READ
Bell announces plans to open six AI data centres in B.C. as part of Bell AI Fabric - BNN Bloomberg
New data centers across North America expand Groq's network, now serving over 20 million tokens per second
Groq, the pioneer in fast AI inference, today announced an exclusive partnership with Bell Canada to power Bell AI Fabric, the country's largest sovereign AI infrastructure project.
Bell AI Fabric will establish a national AI network across six sites, targeting 500MW of clean, hydro-powered compute. It begins with a 7MW Groq facility in Kamloops, British Columbia, coming online in June.
"As AI moves into production, nations are rethinking where inference runs and who controls it," said Jonathan Ross, CEO and Founder of Groq. "We're building infrastructure that's fast, affordable, and sovereign by design, already powering some of the largest inference deployments in the world."
This month, Groq also brought new data centers online in Houston (DataBank) and Dallas (Equinix), pushing total global network capacity to over 20 million tokens per second.
The momentum reflects rising demand for Groq's LPU-based systems—built for real-time inference with unmatched speed and efficiency. Groq delivers the lowest cost per token without compromise, making large-scale AI viable for governments and enterprises worldwide.
Groq builds fast. Data centers go live in weeks, bringing AI closer to users and giving partners more control over where and how inference runs. Local infrastructure means lower latency, stronger data governance, and faster response times at scale.
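As a rough back-of-envelope illustration (my arithmetic, not a figure from the release), the quoted network capacity can be translated into concurrent full-speed inference streams, assuming each stream runs at the 625 tokens/sec per-stream throughput Groq cites for the Llama API elsewhere in this roundup:

```python
# Back-of-envelope illustration: how many full-speed inference streams a
# 20M token/sec network could serve concurrently. The 625 tokens/sec
# per-stream figure is Groq's quoted Llama API throughput; real capacity
# planning would also account for bursts and partial-speed streams.
network_capacity_tps = 20_000_000   # tokens per second, total network
per_stream_tps = 625                # tokens per second, one stream

concurrent_streams = network_capacity_tps // per_stream_tps
print(concurrent_streams)  # 32000 simultaneous full-speed streams
```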
"Through Bell AI Fabric, we're building the backbone for Canada's AI economy," said Mirko Bibic, President & CEO, BCE and Bell Canada. "Groq's technology delivers the speed and efficiency our customers need—now, not years from now."
About Groq
Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.6 million developers and Fortune 500 companies trust Groq to build fast and scale smarter.
Groq Media Contact
pr-media@groq.com
Databricks acquires Neon for $1 billion to deliver serverless Postgres for AI agents and developers
Databricks, the Data and AI company, announced its intent to acquire Neon, a leading serverless Postgres company. As the $100-billion-plus database market braces for unprecedented disruption driven by AI, Databricks plans to continue innovating and investing in Neon's database and developer experience for existing and new Neon customers and partners.
Neon: An Open, Serverless Foundation for Developers and AI Agents
AI agents are becoming integral to modern development, and Neon is purpose-built to support their workflows. Recent internal telemetry showed that over 80 percent of the databases provisioned on Neon were created automatically by AI agents rather than by humans, underscoring how rapidly agentic workloads are growing. These workloads differ from human-driven patterns in three important ways:
- Speed + flexibility: Agents operate at machine speed and traditional database provisioning often becomes a bottleneck — Neon can spin up a fully isolated Postgres instance in 500 milliseconds or less and supports instant branching and forking of not only database schema but also data, so experiments never disturb production.
- Cost proportionality: Agents demand a cost structure that scales precisely with usage — Neon's full separation of compute and storage keeps the total cost of ownership for thousands of ephemeral databases proportional to the queries they actually run.
- Open source ecosystem: Agents expect to leverage the rich Postgres community — Neon is 100 percent Postgres-compatible and works out of the box with popular extensions.
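To make the agent-driven provisioning pattern concrete, here is a minimal sketch of the request an agent might construct: a fresh project, or a copy-on-write branch of an existing one. The endpoint paths and payload fields are assumptions for illustration, not Neon's documented API; consult Neon's API reference before relying on them. The actual HTTP call is left commented out.

```python
import json

# Assumed base URL for illustration only; verify against Neon's docs.
NEON_API_BASE = "https://console.neon.tech/api/v2"

def build_provision_request(project_name, parent_branch=None):
    """Build the request an agent would send: a fresh isolated project,
    or a copy-on-write branch (schema *and* data) of an existing one,
    so experiments never disturb production."""
    if parent_branch is None:
        return {
            "method": "POST",
            "url": f"{NEON_API_BASE}/projects",
            "body": {"project": {"name": project_name}},
        }
    return {
        "method": "POST",
        "url": f"{NEON_API_BASE}/projects/{project_name}/branches",
        "body": {"branch": {"parent_id": parent_branch}},
    }

req = build_provision_request("agent-sandbox")
print(json.dumps(req["body"]))
# e.g. requests.post(req["url"], json=req["body"],
#                    headers={"Authorization": f"Bearer {api_key}"})
```

The point of the sketch is the shape of the workload: thousands of short-lived, fully isolated databases created and destroyed programmatically, which is what makes sub-second provisioning and usage-proportional cost matter.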
"The era of AI-native, agent-driven applications is reshaping what a database must do," said Ali Ghodsi, Co-Founder and CEO at Databricks. "Neon proves it: four out of every five databases on their platform are spun up by code, not humans. By bringing Neon into Databricks, we're giving developers a serverless Postgres that can keep up with agentic speed, pay-as-you-go economics and the openness of the Postgres community."
Databricks and Neon's Shared Vision
Together, Databricks and Neon will work to remove the traditional limitations of databases that require compute and storage to scale in tandem — an inefficiency that hinders AI workloads. The integration of Neon's serverless Postgres architecture with the Databricks Data Intelligence Platform will help developers and enterprise teams efficiently build and deploy AI agent systems. This approach not only prevents performance bottlenecks from thousands of concurrent agents but also simplifies infrastructure, reduces costs and accelerates innovation — all with Databricks' security, governance and scalability at the core.
"Four years ago, we set out to build the best Postgres for the cloud that was serverless, highly scalable, and open to everyone. With this acquisition, we plan to accelerate that mission with the support and resources of an AI giant," said Nikita Shamgunov, CEO of Neon. "Databricks was founded by open source pioneers committed to making it easier for developers to work with data and AI at any scale. Together, we are starting a new chapter on an even more ambitious journey."
Neon's talented team is expected to join Databricks after the transaction closes, and the team brings deep expertise and continuity for Neon's vibrant community. Together, Neon and Databricks will empower organizations to eliminate data silos, simplify architecture and build AI agents that are more responsive, reliable and secure.
We plan to share more at Data + AI Summit in San Francisco, taking place June 9–12.
Details Regarding the Proposed Acquisition
The proposed acquisition is subject to customary closing conditions, including any required regulatory clearances.
About Neon
Neon was founded in 2021 by a team of experienced database engineers and Postgres contributors with a singular goal: to build a serverless Postgres platform that helps developers build reliable and scalable applications faster, from personal projects to startups, all the way to enterprises.
About Databricks
Databricks is the Data and AI company. More than 10,000 organizations worldwide — including Block, Comcast, Condé Nast, Rivian, Shell and over 60% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to take control of their data and put it to work with AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on X, LinkedIn and Facebook.
Groq named official inference provider for Saudi AI company HUMAIN
Groq, the pioneer in AI inference, today announced a major global expansion, accelerating its rise as an emerging hyperscaler. With significant new deployments and growing momentum with partners like HUMAIN, Groq continues to set the standard for AI infrastructure speed, scalability, and cost-efficiency.
Groq has been named an official inference provider for HUMAIN, a newly launched AI company headquartered in Saudi Arabia and designed to operate across the full AI value chain. HUMAIN's mission is to transform economies through large-scale AI capabilities, from infrastructure to state-of-the-art models, and Groq's ultra-efficient inference technology will be central to that mission.
The announcement builds on Groq's opening of a data center in Dammam, Saudi Arabia, which has been serving traffic since February. It's part of a $1.5 billion commitment from the Kingdom to supercharge AI development in the region and expand Groq's presence in global markets.
Groq prioritizes U.S.-based development for its systems and has scaled rapidly to meet the needs of a global AI ecosystem—filling a strategic void that might otherwise be met by foreign providers. With new data centers coming online across North America this month, Groq has the capacity to serve today's most demanding workloads and continues to expand to meet tomorrow's.
More than 1.5 million developers and leading global organizations now trust Groq to build AI applications with speed, reliability, and scale.
To learn more about Groq's partnership with HUMAIN, visit: https://groq.humain.ai/

About Groq
Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.5 million developers trust Groq to build fast and scale smarter.
Media Contact
Groq
pr-media@groq.com
Groq powers Meta AI to deliver fast inference for the official Llama API
Meta and Groq have joined forces to deliver blazing-fast, zero-setup access to Llama 4. Developers can request early access to the official Llama API.
ALSO READ:
Introducing the fastest way to run the world's most trusted openly available models with no tradeoffs
Groq, a leader in AI inference, announced today its partnership with Meta to deliver fast inference for the official Llama API – giving developers the fastest, most cost-effective way to run the latest Llama models.
Now in preview, the Llama 4 models accelerated by Groq will run on the Groq LPU, the world's most efficient inference chip. That means developers can run Llama models with no tradeoffs: low cost, fast responses, predictable low latency, and reliable scaling for production workloads.
"Teaming up with Meta for the official Llama API raises the bar for model performance," said Jonathan Ross, CEO and Founder of Groq. "Groq delivers the speed, consistency, and cost efficiency that production AI demands, while giving developers the flexibility and control they need to build fast."
Unlike general-purpose GPU stacks, Groq is vertically integrated for one job: inference. Builders are increasingly switching to Groq because every layer, from custom silicon to cloud delivery, is engineered to deliver consistent speed and cost efficiency without compromise.
The Llama API is the first-party access point for Meta's openly available models, optimized for production use.
With Groq infrastructure, developers get:
- Throughput of up to 625 tokens/sec
- Minimal lift to get started – just three lines of code to migrate from OpenAI
- No cold starts, no tuning, no GPU overhead
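The "three lines to migrate" claim rests on Groq exposing an OpenAI-compatible endpoint, so an OpenAI client typically only needs a new base URL, API key, and model name. A sketch of the request shape using only the standard library (the model id is an example; verify it against Groq's current model list):

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible base URL; an OpenAI SDK client would pass this
# as base_url. Model id below is an assumed example, not confirmed here.
BASE_URL = "https://api.groq.com/openai/v1"

payload = {
    "model": "meta-llama/llama-4-scout-17b-16e-instruct",  # example id
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)
print(req.full_url)
# urllib.request.urlopen(req)  # uncomment with a valid GROQ_API_KEY set
```

Because the wire format matches OpenAI's chat completions API, existing client code needs only the endpoint, key, and model name swapped.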
Fortune 500 companies and more than 1.4 million developers already use Groq to build real-time AI applications with speed, reliability, and scale.
The Llama API is available to select developers in preview here with broader rollout planned in the coming weeks.
For more information on the Llama API x Groq partnership, please visit here.
About Groq
Groq is the AI inference platform redefining price and performance. Its custom-built LPU and cloud run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over a million developers use Groq to build fast and scale smarter.
Media Contact
Groq PR
pr-media@groq.com
Lightmatter announces the fastest co-packaged optics for AI: Passage L200
Lightmatter's revolutionary 3D photonic interconnect solution eliminates bandwidth bottlenecks in AI infrastructure for datacenters.
Lightmatter, the leader in photonic supercomputing, today announced Passage™ L200, the world’s first 3D co-packaged optics (CPO) product. Designed to integrate with the latest XPU and switch silicon designs, Passage L200 unleashes unprecedented AI performance scaling by eliminating interconnect bandwidth bottlenecks. The L200 3D CPO family includes both 32 Tbps and 64 Tbps versions, representing a 5 to 10x improvement over existing solutions. This enables over 200 Tbps of total I/O bandwidth per chip package, resulting in up to 8x faster training time for advanced AI models.
Bandwidth scaling has significantly trailed gains in compute performance. Continued advances in AI compute require fundamental changes in interconnect technology. Current connectivity solutions, including electrical, optical and conventional CPO, are bandwidth limited because their I/O interfaces are restricted to the “shoreline,” or edge of the chip. Passage L200 overcomes these constraints with the world’s first edgeless I/O, scaling bandwidth across the entire die area. This modular 3D CPO solution leverages a standard interoperable UCIe die-to-die (D2D) interface, and facilitates scalable chiplet-based architectures to seamlessly integrate with next generation XPUs and switches.
The Passage L200 3D CPO integrates the latest of Alphawave Semi’s chiplet technology portfolio, combining silicon-proven low power and low latency UCIe and optics-ready SerDes with Lightmatter’s category-defining photonic integrated circuit (PIC). Alphawave Semi’s advanced-node electrical integrated circuit (EIC) is 3D integrated on the Passage PIC using standard chip-on-wafer (CoW) techniques. Passage 3D integration enables SerDes I/O to be positioned anywhere on the die, rather than being confined to its shoreline, delivering the equivalent bandwidth of 40 pluggable optical transceivers per L200. Additionally, multiple L200s can be integrated in a package to serve a broad range of XPU and switch applications.
“Bandwidth scaling has become the critical impediment to AI advancement,” said Nick Harris, founder and CEO of Lightmatter. “The engineering breakthroughs represented by our L200 family of 3D CPO solutions provide the fundamental building blocks that will pave the way for next-gen AI processors and switches.”
“We are thrilled to collaborate with Lightmatter on the delivery of the L200,” said Tony Pialis, president and Chief Executive Officer of Alphawave Semi. “Our extensive portfolio, featuring proven chiplets, optical DSPs, and connectivity silicon subsystems, synergizes with Lightmatter’s 3D photonics to create a dynamic solution that propels the next generation of AI infrastructure forward.”
“AI data center interconnects face growing bandwidth and power challenges,” said Andrew Schmitt, founder and directing analyst at Cignal AI. “Co-Packaged Optics (CPO) – integrating optics directly onto XPUs and switches – is the inevitable solution. Lightmatter’s bold approach delivers the essential elements of CPO and gives hyperscalers and chip manufacturers a path to deliver high-performance systems.”
The L200 is engineered for high-volume manufacturing with industry-leading silicon photonics fab and OSAT partners including GlobalFoundries, ASE, and Amkor as well as advanced node CMOS foundries. Built with advanced redundancy and resiliency, the L200 is powered by Lightmatter’s Guide light engine, delivering exceptional laser integration and total optical power per module to support the full bandwidth of L200.
Lightmatter offers two product SKUs: the L200 (32Tbps) and L200X (64Tbps) 3D CPO engines. These solutions build upon the company’s proven Passage technology platform, offering 16 WDM wavelengths per waveguide/fiber with the most advanced and fully integrated photonics control capabilities.
Key features of the L200 and L200X include:
- Advanced node CMOS EIC
- 32Gbps UCIe D2D interface (IP offered royalty-free by Alphawave Semi for accompanying XPU/Switch die)
- 320 Optically-optimized multi-rate/multi-protocol SerDes
- Passage PIC
- L200: 56 Gbps NRZ, 32 Tbps total (Tx+Rx)
- L200X: 106/112 Gbps PAM4, 64 Tbps total (Tx+Rx)
- 16 wavelength WDM per waveguide/fiber for 800Gbps/1.6Tbps per fiber
- Pluggable fiber connectors for lasers and data
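The quoted figures are mutually consistent, which a quick arithmetic check makes clear. One assumption on my part: the 800G/1.6T per-fiber numbers imply roughly 50/100 Gbps of payload per wavelength after line-rate overhead on the 56G NRZ / 112G PAM4 lanes.

```python
# Cross-checking the L200/L200X feature-list figures above.
wavelengths_per_fiber = 16

l200_payload_per_lane_gbps = 50    # assumed payload of a 56 Gbps NRZ lane
l200x_payload_per_lane_gbps = 100  # assumed payload of a 112 Gbps PAM4 lane

l200_per_fiber = wavelengths_per_fiber * l200_payload_per_lane_gbps    # 800 Gbps
l200x_per_fiber = wavelengths_per_fiber * l200x_payload_per_lane_gbps  # 1600 Gbps

# 32 Tbps of total I/O at 800 Gbps per pluggable matches the release's
# "equivalent bandwidth of 40 pluggable optical transceivers per L200".
pluggable_equivalents = 32_000 // l200_per_fiber  # Gbps / Gbps
print(l200_per_fiber, l200x_per_fiber, pluggable_equivalents)  # 800 1600 40
```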
Available in 2026, Lightmatter’s L200 and L200X 3D CPO chips are designed to accelerate time to market and performance of next generation XPUs and switches for the next wave of foundation AI models.
Lightmatter will showcase its latest innovations in its booth (#5145) at the Optical Fiber Conference in San Francisco, from April 1-3, 2025.
For more information, please visit https://lightmatter.co/
About Lightmatter
Lightmatter is leading a revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.
Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co
Lightmatter unveils the world’s fastest AI interconnect: Passage M1000 3D Photonic Superchip
Lightmatter’s breakthrough 3D photonic interposer enables the highest bandwidth and largest die complexes for next-gen AI infrastructure silicon designs
ALSO READ:
Lightmatter announces the fastest co-packaged optics for AI Passage L200
In the rapidly evolving landscape of AI, the demand for efficient and high-speed interconnect solutions is paramount. Lightmatter, a leading startup in silicon photonics technology, has introduced the Passage M1000, an innovative optical interposer designed to meet the increasing bandwidth needs of AI applications, thereby transforming data center operations.
- The Passage M1000 is a silicon photonic interposer that enables high-bandwidth communication between AI chips, achieving speeds of petabits per second and addressing the demands of modern AI applications.
- Its edgeless I/O design allows electro-optical connections across the entire surface of the chip, eliminating bottlenecks and facilitating seamless communication between stacked dies through a reconfigurable waveguide network.
- With a record-breaking 114 terabits per second of optical bandwidth, the M1000 outperforms conventional solutions, thanks to its 256 fiber optic attach points.
- By utilizing silicon photonics, the M1000 reduces energy consumption compared to traditional interconnects, making it a sustainable choice for data centers as AI workloads grow.
Lightmatter, the leader in photonic supercomputing, today announced Passage™ M1000, a groundbreaking 3D Photonic Superchip designed for next-generation XPUs and switches. The Passage™ M1000 enables a record-breaking 114 Tbps total optical bandwidth for the most demanding AI infrastructure applications. At more than 4,000 square millimeters, the M1000 reference platform is a multi-reticle active photonic interposer that enables the world’s largest die complexes in a 3D package, providing connectivity to thousands of GPUs in a single domain.

In existing chip designs, interconnects for processors, memory, and I/O chiplets are bandwidth limited because electrical input/output (I/O) connections are restricted to the edges of these chips. The Passage M1000 overcomes this limitation by unleashing electro-optical I/O virtually anywhere on its surface for the die complex stacked on top. Pervasive interposer connectivity is enabled by an extensive and reconfigurable waveguide network that carries high-bandwidth WDM optical signals throughout the M1000. With fully integrated fiber attachment supporting an unprecedented 256 fibers, the M1000 delivers an order of magnitude higher bandwidth in a smaller package size compared to conventional Co-Packaged Optics (CPO) and similar offerings.
Lightmatter has worked closely with industry leaders, including GlobalFoundries (GF) and Amkor, to facilitate production readiness for customer designs based on the M1000 reference platform, while ensuring the highest standards of quality and performance. The Passage M1000 utilizes the GF Fotonix™ silicon photonics platform which offers seamless integration of photonic components with high-performance CMOS logic into a single die, creating a production-ready design that can scale effectively with AI demands.
“Passage M1000 is a breakthrough achievement in photonics and semiconductor packaging for AI infrastructure,” said Nick Harris, founder and CEO of Lightmatter. “We are delivering a cutting-edge photonics roadmap years ahead of industry projections. Shoreline is no longer a limitation for I/O. This is all made possible by our close co-engineering with leading foundry and assembly partners and our supply chain ecosystem.”
“GF has a long-standing strategic partnership with Lightmatter to commercialize its breakthrough photonics technology for AI data centers,” said Dr. Thomas Caulfield, president and CEO of GF. “The M1000 photonic interposer architecture, built on our GF Fotonix platform, sets the pace for photonics performance and will transform advanced AI chip design. Our advanced manufacturing capabilities and highly flexible, monolithic silicon photonics solution are instrumental in bringing this technology to market, and we look forward to continuing our close collaboration with Lightmatter.”
“The insatiable demand for scale-up bandwidth is fueling interconnect innovation and momentum, with in-package optical integration at the forefront,” said Vlad Kozlov, founder and CEO, LightCounting. “Lightmatter’s unique 3D active photonic interposer presents a compelling advancement, with capabilities that surpass existing CPO solutions.”
Key features of the M1000 include:
- 8-tile 3D active interposer with integrated programmable waveguide network
- 3D integrated electrical integrated circuits containing a total of 1024 Electrical SerDes
- 56 Gbps NRZ modulation
- 8 wavelength WDM transmission on waveguides and fibers
- 256 optical fibers edge attached with 448 Gbps bandwidth per fiber
- 1.5 kW power delivery in integrated advanced package (7,735 mm2)
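The headline bandwidth follows directly from the feature list: 8 wavelengths at 56 Gbps each give the quoted 448 Gbps per fiber, and 256 fibers multiply out to the "record-breaking 114 Tbps":

```python
# Cross-checking the M1000 headline numbers from the feature list above.
wavelengths = 8
lane_gbps = 56          # 56 Gbps NRZ per wavelength
fibers = 256

per_fiber_gbps = wavelengths * lane_gbps        # 448 Gbps, as quoted
total_tbps = fibers * per_fiber_gbps / 1000     # 114.688 Tbps ("114 Tbps")
print(per_fiber_gbps, total_tbps)  # 448 114.688
```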
The Passage M1000 and Passage L200, also announced today, accelerate advances in AI by enabling larger and more capable AI models to be trained faster than ever before. Passage M1000 will be available in the summer of 2025, accompanied by the world’s most powerful light engine: Guide™, from Lightmatter.
Lightmatter will showcase its latest innovations in its booth #5145 at the Optical Fiber Conference in San Francisco, from April 1-3, 2025.
For more information on Passage M1000, please visit https://lightmatter.co/
About Lightmatter
Lightmatter is leading a revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.
Contacts
Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co
All Things Photonics podcast spotlight with Lightmatter Founder & CEO Nick Harris
Lightmatter is pushing the envelope of AI interconnect and networking standards in an effort to bring AI infrastructure into a photonic era. MIT graduate Nick Harris founded Lightmatter in 2017. In this Spotlight conversation, he offers a look into the formation and future of his company, as well as the breakthroughs driving the photonic supercomputing group today. Spotlight from "All Things Photonics" is a special series of intimate sit-down conversations with preeminent figures in our industry.
Photonics Media publishes business-to-business magazines, buyers’ guides, and websites for individuals working with light-based technologies in the photonics industry. A pioneering publisher in the relatively new discipline of photonics, Photonics Media has built a large global audience comprising academics and researchers, manufacturers, and end-users. The Photonics Media YouTube channel features video coverage of news, products, and events in the photonics industry.