Nixxy (NASDAQ:NIXX) acquires EDGE Data Center and Telecom to accelerate its AI infrastructure market expansion
Nixxy (NASDAQ:NIXX), a technology company developing and providing AI-powered business services, today announced it has acquired the EDGE data center assets of Everythink Innovations Limited ("EIL"), a telecom and edge infrastructure provider with existing operations in Fremont, CA, and Vancouver, Canada. The acquisition gives Nixxy its own facilities to host its AI applications, accelerating the rollout of its AI data infrastructure layer and providing immediate revenue scale and infrastructure capacity aligned with communications, AI, and data monetization applications.
The transaction, valued at $3.65 million in cash and stock, covers EIL's infrastructure and telecom assets, including more than $48 million in contracted annual recurring revenue ("ARR") from wholesale data, VPN, and interconnect services. That revenue will immediately add to Nixxy's July base of $7.5 million and factor into August's projected $10 million revenue base.
"This strategic asset purchase is a transformative step forward for Nixxy," said Mike Schmidt, CEO of Nixxy. "It accelerates our AI data infrastructure rollout and positions us with operational assets across two key North American data center hubs, while serving as the blueprint for future edge data center expansion in North America, Europe and Asia."
Strategic Highlights:
- AI-Ready Infrastructure: Immediate access to edge-optimized Tier 3 infrastructure in Vancouver and Fremont, with owned servers, routers, and active IX connectivity.
- Accelerated Market Entry: The transaction enables Nixxy to immediately deploy AI-enhanced services, including high-performance compute, DeFi integrations, and data-based systems.
- Cross-Border Platform: With dual presence in Canada and the U.S., Nixxy is positioned for low-latency service delivery, robust disaster recovery, and North American market reach.
- Live Revenue and Profitability: The acquired telecom assets contribute $48M in ARR and strategic deal flow.
- Technology Synergy: Integration of EIL's carrier-grade routing, switching, and Amazon Web Services (AWS) hybrid failover assets will power Nixxy's data infrastructure services across enterprise, telecom, and AI sectors.
This transaction marks a critical inflection point in Nixxy's strategic transformation into a global AI Data Infrastructure provider. It not only adds scale and operational maturity, but also significantly improves time to market for the company's AI-powered services and partnerships.
About Nixxy, Inc.
Nixxy Inc. (NASDAQ:NIXX) is a technology company developing and providing AI-powered business services, powering the next generation of intelligent services across telecom, healthcare, and enterprise markets. Anchored by its proprietary AI Infrastructure platform, Nixxy provides scalable, secure, and LLM-agnostic infrastructure for deploying private AI at scale. From global voice and messaging to AI-enhanced diagnostics, Nixxy delivers solutions where infrastructure, intelligence, and monetizable data converge. With a strategy focused on platform extensibility, data monetization, and data access models, Nixxy is building the foundation for the future of enterprise AI deployment and the private data economy.
Filings and press releases can be found at http://www.nixxy.com/investor-relations.
Contact Information
Investor Contact: Nixxy, Inc. Investor Relations
Email: IR@nixxy.com
Phone: (877) 708-8868
Forward-Looking Statements Disclaimer
This press release may contain forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. All statements other than statements of historical fact are forward-looking statements, including those regarding the Company's business strategy, future operations, acquisition strategy, financial position, potential growth, spin-out transactions, and market opportunities. Words such as 'anticipates,' 'believes,' 'expects,' 'intends,' 'plans,' and 'will,' or similar expressions, are intended to identify forward-looking statements. These statements are based on the Company's current expectations and beliefs and involve significant risks and uncertainties that could cause actual results to differ materially from those expressed or implied by the forward-looking statements. Readers are cautioned not to place undue reliance on these forward-looking statements. The Company disclaims any obligation to update or revise forward-looking statements, whether as a result of new information, future events, or otherwise, except as required by law.
No Offer or Solicitation Disclaimer
This communication is for informational purposes only and is not intended to and does not constitute an offer to sell or the solicitation of an offer to buy any securities or a solicitation of any vote or approval. No offer of securities shall be made except by means of a prospectus meeting the requirements of Section 10 of the Securities Act of 1933, as amended.
Risk Factors
Investors should carefully consider the risks associated with the Company's business and the transaction described herein, including but not limited to: the uncertainty surrounding the timing of the transaction or proposed spin-out; the ability to successfully execute acquisitions and integrate acquired companies; the impact of technological changes on the Company's operations; and other risks detailed in the Company's filings with the Securities and Exchange Commission, including those risk factors contained in the Company's Form 10-K for the year ended December 31, 2024.
Bluefish AI raises $20M Series A led by NEA to power AI marketing for the Fortune 500
We’re excited to announce that Laconia Capital portfolio company Bluefish raised a $20M Series A led by New Enterprise Associates (NEA) and Salesforce Ventures.
Bluefish is the AI marketing platform for the Fortune 500. From day one, Bluefish has been helping brands like Adidas, Omnicom, and Tishman Speyer gain visibility and influence across AI platforms like ChatGPT, Google AI, Meta AI, and Amazon Rufus. With 10x revenue growth in the past six months and over 80% of its customer base coming from the Fortune 500, Bluefish is quickly becoming the go-to enterprise solution in this space.
Laconia Capital has been an early backer of CEO Alex Sherman and COO Jing Feng, first as a seed investor in PromoteIQ (acquired by Microsoft) and later as a pre-seed investor in Bluefish. Their track record of building at Fortune 500 scale is unmatched, and this journey is just beginning. Congratulations to Alex Sherman, Jing Feng, Andrei Dunca, and the whole team on this milestone.

NEA and Salesforce Ventures lead round to help enterprise marketers gain visibility and influence over brand performance across the AI internet
Bluefish, the leading AI marketing platform for the Fortune 500, announced a $20M Series A funding round led by NEA, with participation from Salesforce Ventures. Additional investors include Crane Venture Partners, Swift Ventures, and Bloomberg Beta, bringing total Bluefish funding to $24M within 12 months of launch. Bluefish also unveiled its new Custom AI Audiences feature, which enables brands to manage their AI performance with unprecedented granularity and precision.
As AI becomes the first stop for product discovery and purchasing decisions, Bluefish helps enterprise marketers gain visibility and influence over AI-generated responses. Over the last six months, Bluefish has grown revenue 10x and now counts Adidas and Tishman Speyer among its enterprise customers. This new financing will enable product expansion and help scale engineering and customer-facing teams.
"Over the past year, the way consumers find and buy new products has radically changed, migrating from conventional search to AI," said Bluefish CEO Alex Sherman. "Search marketers were the first to recognize this shift in consumer behavior, but it is increasingly clear that the entire enterprise marketing stack will need to be reimagined for AI. Successful marketers will need a suite of AI-native tools to track, measure, and optimize for this new channel. These tools will be critical to winning customers who are now spending more time in AI than on the open web. That's what we are building at Bluefish."
Rebuilding the enterprise marketing stack for AI
Bluefish analyzes millions of prompt responses for the world's largest brands, delivering robust insights into how large language models (LLMs) respond to consumers and present brand narratives. The Bluefish platform enables marketers to shape their AI presence with targeted optimizations that boost brand visibility, favorability, and message consistency across all major AI channels, including OpenAI's ChatGPT, Meta AI, and Google AI.
The Bluefish platform was designed to support the entire marketing organization, including search, content, brand, and communications teams. AI Monitoring, AI Optimization (AIO), and AIO Measurement are key offerings that enable brands to:
- Track – Monitor AI positioning and performance with real-time tracking of AI responses
- Optimize – Tune content strategies to address key opportunities surfaced by Bluefish data
- Measure – Track the impact of optimizations against custom AI segments and KPIs
Platform brings enterprise-grade sophistication to AI marketing
Bluefish has emerged as the enterprise choice for AI marketing. It delivers the level of control large brands expect, with full transparency into every prompt, response, and cited source. Unlike one-size-fits-all tools that lean on generic prompts or recycled data, Bluefish lets each customer build custom prompt methodologies—so insights and actions mirror their business, not someone else's.
"Our customers represent some of the most sophisticated marketers in the world," said Bluefish COO Jing Feng. "They need customized solutions that enable differentiation in order to stay ahead. Generic one-size-fits-all platforms will inevitably fall short."
This focus on enterprise is working. Bluefish has seen enormous customer demand since launch, with more than 80% of its customers coming from the Fortune 500, including category leaders across financial services, auto, CPG, and beauty brands. The two-year-old company already operates globally, supporting major customers across international markets and languages.
"We're living through a paradigm shift as AI transforms how consumers discover, evaluate, and buy—the stakes for global brands have never been higher. Bluefish was built from the ground up to support the needs of enterprises, and is led by a proven team that has guided CMOs and marketing teams through the last major transitions of the internet. We believe Bluefish is defining the enterprise category for AI marketing," said Ann Bordetsky, Partner at NEA.
Bluefish is led by a founding team of industry veterans with a 20-year track record building marketing technology for the world's largest brands. CEO Alex Sherman previously co-founded PromoteIQ, a major retail media platform acquired by Microsoft in 2019. CTO Andrei Dunca previously co-founded LiveRail, a leading video advertising platform acquired by Facebook in 2014. COO Jing Feng previously held senior leadership roles at Microsoft, PromoteIQ, and LiveRail.
"We have learned a lot about what it takes to deliver at Fortune 500 scale," said Jason Spinell, Partner at Salesforce Ventures. "Bluefish is one of the few AI marketing platforms we've seen that is purpose-built for enterprise complexity. For brands that are looking for a partner that actually understands enterprise, Bluefish stands out."
Introducing Bluefish Custom AI Audiences
Bluefish also unveiled the commercial release of its Custom AI Audiences capability, which allows enterprise marketers to define unique profiles and access tailored insights by customer segment. This feature enables brands to integrate their own proprietary approaches into the platform, creating a significant competitive advantage.
Marketers can now better assess AI discoverability, citation influence, and content narrative shifts by audience to drive smarter AI optimizations.
About Bluefish
Bluefish is the AI marketing platform for enterprise brands. As product discovery transitions to AI platforms like ChatGPT and Google AI, Fortune 500 brands use Bluefish to gain visibility and influence over this critical new channel. Bluefish is led by the team behind PromoteIQ (acquired by Microsoft) and LiveRail (acquired by Facebook) and is headquartered in New York City.
Learn more at bluefishai.com.
Aderant acquires HerculesAI legal technology assets accelerating AI capabilities in work-to-cash solution
Aderant + HerculesAI technology = intelligent automation for the world’s most innovative law firms. The acquisition of HerculesAI’s legal technology assets supercharges Aderant’s already industry leading legal technology suite.
For more than a decade, HerculesAI has been laser-focused on providing AI solutions that eliminate manual work, enable customers to scale beyond human capabilities, and deliver real ROI.
HerculesAI CEO Alex Babin said, “I am proud of the work that our team has put into building our legal technology solutions and look forward to seeing them incorporated into Aderant’s suite, where they will be used by hundreds of thousands of people worldwide.”
Aderant, a leading global provider of business management solutions for law firms, today announced that it has signed a definitive agreement to acquire the legal technology assets from HerculesAI, a pioneer in AI-driven billing compliance and decision intelligence. This strategic move marks a significant leap forward in Aderant’s mission to deliver a fully automated, insight-rich, and agile work-to-cash solution for the legal industry.
By embedding HerculesAI’s advanced machine learning and decision intelligence into Aderant’s solutions—and supercharging MADDI, Aderant’s AI-powered virtual associate—Aderant is unlocking unprecedented levels of automation, insight, and agility for law firms around the globe. This powerful integration doesn’t just streamline workflows; it proactively prevents revenue leakage, boosts realization rates, and redefines compliance as a strategic advantage—all driven by AI that understands the context, intent, and urgency behind every decision.
“This acquisition is a game-changer for our clients,” said Chris Cartrett, President & CEO of Aderant. “By integrating HerculesAI’s advanced compliance engine with our industry-leading work-to-cash platform, we’re not just automating workflows—we’re enabling intelligent automation that drives measurable profitability. Law firms will gain unprecedented precision, speed, and confidence in their operations. Beyond the technology, we’re equally excited to welcome the entire HerculesAI legal technology team to Aderant. AI-centered development is at the core of our strategy, with many successful product releases in recent years. This acquisition will only enhance the AI engineering and data science talent within Aderant. That’s not just a win—it’s a strategic leap forward.”
Alex Babin, founder and CEO of HerculesAI, added, “Aderant is a leading force in legal technology, and each of HerculesAI’s legal products fits naturally into their portfolio. This move strengthens Aderant’s end-to-end work-to-cash offering and further transforms compliance from a bottleneck into a business accelerator. Just as important as product synergy is the cultural alignment between our teams - a shared commitment to solving real problems for customers. That alignment goes beyond technology and sets the foundation for long-term impact.”
Aderant has released nine AI-driven products in the past two years. The integration of HerculesAI technology into Aderant’s ecosystem will enable even greater real-time enforcement of outside counsel guidelines (OCGs), internal billing policies, and historical patterns—delivering audit-ready analytics and one-click resolution of billing issues. Combined with Aderant’s AR Automation and Cloud solutions, this creates a seamless, end-to-end work-to-cash experience.
Aderant will be exhibiting at ILTACON 2025, conference of the International Legal Technology Association, on August 11-14 at the Gaylord National Resort and Convention Center in National Harbor, Maryland (Aderant booth #501 & HerculesAI booth #1209).
For more information on Aderant, visit aderant.com.
About Aderant®
Aderant is dedicated to helping law firms run a better business. As a leading global provider of business management and practice-of-law solutions, the world’s best firms rely on Aderant to keep their businesses moving forward and inspire innovation. At Aderant, the “A” is more than just a letter. It represents how we fulfill our foundational purpose, serving our clients. Aderant operates as a business unit of Roper Technologies (Nasdaq: ROP), a constituent of the S&P 500 and Fortune 1000. The company is headquartered in Atlanta, Georgia, and has several other offices across North America, Europe, and Asia-Pacific. For more information, visit Aderant.com, email info@aderant.com, or follow the company on LinkedIn.
About HerculesAI
HerculesAI converts rules-heavy, document-bound workflows into real-time, auditable data flows for the world’s most regulated enterprises. Its AI agents capture data from any source, map it to company or industry standards, and validate every field against business rules—eliminating manual entry, cutting administration time by up to 70 percent, and ensuring continuous compliance. SOC 2 (Type II)-certified, the platform can be deployed in the customer’s cloud or on-premises. Founded in 2014, HerculesAI's headquarters are in Campbell, California, with global offices in Europe, the United Kingdom and Canada. Learn more at hercules.ai.
Contacts
Media Contacts
Will Ayers
Manager, Communications
Will.ayers@aderant.com

Vice President of Marketing, HerculesAI
216-208-4050
AlphaSense Launches Autonomous AI Agent Interviewer and Channel Checks to deliver real-time market signals across every sector of the economy
AlphaSense just changed the game for market intelligence with the launch of their AI Agent Interviewer and Channel Checks to give you real-time market signals across every sector. Think demand shifts, pricing moves, and supply chain changes before the rest of the market catches on.
Here’s why it matters:
✅ 200K+ investor-led expert interviews in the world’s largest transcript library
✅ AI-led expert calls that uncover fresh insights in hours
✅ Research synthesized in minutes so you can act faster than the competition
It's basically taking what was already the smartest, most comprehensive expert research platform out there and giving it superpowers. Instead of choosing between quality or speed, you finally get both.
Capabilities augment the world’s largest investor-led Tegus Expert Transcript Library with autonomous AI-led interviews, driving exponential scaling of insights in the AlphaSense platform
AlphaSense, the AI platform redefining market intelligence for the business and financial world, today launched its AI agent interviewer and debuted Channel Checks, game-changing capabilities that expand Tegus Expert Insights, the company’s comprehensive expert research offering. For the first time, users can now get a real-time pulse on the economy through Channel Check interviews, surfacing market-moving insights such as demand shifts, pricing changes, and supply chain disruptions.
This expansion unites the unrivaled Tegus Expert Transcript Library – the world’s largest and fastest-growing collection of investor-led expert calls, now surpassing 200,000 transcripts across 25,000 public and private companies – with the AI Interviewer’s ability to independently conduct and transcribe conversations with top-tier human experts. The result is a powerful blend of trusted, investor-generated research augmented with AI-driven interviews. By leveraging AlphaSense’s Deep Research AI agent, users can easily draw synthesized insights across the full content library, compressing days of research work into minutes.
Built on AlphaSense’s proprietary generative AI and fueled by the market’s deepest intelligence library – spanning 500 million premium business documents and the Tegus Expert Transcript Library – the AI Interviewer leverages the full breadth of prior relevant knowledge to engage senior industry experts in probing conversations, matching the precision of an experienced human interviewer while operating at the speed and scale only AI can deliver.
“With AI now running point on expert calls, AlphaSense has turned primary research into a learning engine. The more the system hears, the better it asks – and the more valuable the answers become. That virtuous cycle should raise the bar for market intelligence platforms everywhere,” said Jay Simons, General Partner at BOND.
Channel Checks are already live across dozens of high-value sub-industries within Technology, Media & Telecom, Energy & Industrials, and Healthcare – with hundreds of new interviews with human experts added every week and coverage continuing to expand.
Customers will soon be able to initiate AI-led expert calls for their own research targets by selecting a topic, approving an AI-generated question guide, and receiving a fully transcribed interview with key takeaways in hours – delivering broader coverage, faster insights, and expert research at unprecedented scale.
The Most Trusted Expert Insights, at Scale
At the core of Tegus Expert Insights is the Tegus Expert Transcript Library – the world’s largest and fastest-growing collection of investor-led interviews, more than double the combined size of all competing libraries, and growing by 7,000 transcripts each month. Trusted by thousands of firms, including more than 50% of Midas List VCs – the world’s top-ranked venture capital investors – it delivers insider perspectives from former executives, customers, competitors, and other key voices – making opaque markets transparent.
“It's my job to get up to speed on companies and markets faster and better than everyone else. Tegus makes me the expert in the room,” said Sai Senthilkumar, Partner, Growth at Redpoint Ventures.
“The investment community has built their trust in investor-led Tegus content over years, and that remains the cornerstone of our expert insights offering,” said Jack Kokko, CEO and Founder of AlphaSense. “Now by combining our unparalleled depth of Tegus content with AI agent-led interviews, we’re delivering the earliest and most accurate view of what’s actually unfolding in the market and helping decision-makers spot market shifts well before they are reflected in earnings reports, filings, or forecasts.”
Relentless Commitment to Quality, Scale, and Compliance
AlphaSense is investing aggressively in scaling proprietary expert content, with over $100 million in annual investment in the rapidly growing Tegus Expert Transcript Library, a global team of hundreds of expert recruiters across all major industries, and a market-leading compliance program led by a former Goldman Sachs executive.
Every expert call is rigorously pre-screened, monitored, and reviewed to protect against material non-public information and ensure strict adherence to global regulatory standards. With a dedicated compliance team of over 75 professionals, AlphaSense ensures clients get the insights they need – in a secure and compliant manner that has earned the market’s trust.
A Unified, End-to-End Agentic Workflow
AlphaSense is the only platform offering a fully autonomous, end-to-end agentic workflow, where AI agents:
- Autonomously source expert insights through live interviews
- Research and generate decision-ready research outputs and work-product
- Continuously learn from each new interview to make the next one even smarter
By pairing trusted investor-led calls with high-velocity, AI-led interviews, AlphaSense’s Tegus Expert Insights amplifies the power of the market’s most comprehensive and intelligent expert research solution.
About AlphaSense
AlphaSense is the AI platform redefining market intelligence and agentic workflow orchestration, trusted by thousands of leading organizations to drive faster, more confident decisions in business and finance. The platform combines domain specific AI with a vast content universe of over 500 million premium business documents – including equity research, earnings calls, expert interviews, filings, news, and clients’ internal proprietary content. Purpose-built for speed, accuracy, and enterprise-grade security, AlphaSense helps clients extract critical insights, uncover market-moving trends, and automate complex workflows with high quality outputs. With AI solutions like Generative Search, Generative Grid, and Deep Research, AlphaSense delivers the clarity and depth professionals need to navigate complexity and obtain accurate, real-time information quickly. For more information, visit www.alpha-sense.com.
Media Contact
Pete Daly for AlphaSense
Email: media@alpha-sense.com
Fast AI inference infrastructure leader Groq and HUMAIN launch OpenAI's new open models on day zero
Available worldwide with real-time performance, low cost, and local support in Saudi Arabia
Groq, the pioneer in fast inference, and HUMAIN, a PIF company and Saudi Arabia's leading AI services provider, today announced the immediate availability of OpenAI's two open models on GroqCloud. The launch delivers gpt-oss-120B and gpt-oss-20B with full 128K context, real-time responses, and integrated server-side tools live on Groq's optimized inference platform from day zero.
Groq has long supported OpenAI's open-source efforts, including large-scale deployment of Whisper. This launch builds on that foundation, bringing their newest models to production with global access and local support through HUMAIN.
"OpenAI is setting a new high performance standard in open source models," said Jonathan Ross, CEO of Groq. "Groq was built to run models like this, fast and affordably, so developers everywhere can use them from day zero. Working with HUMAIN strengthens local access and support in the Kingdom of Saudi Arabia, empowering developers in the region to build smarter and faster."
"Groq delivers the unmatched inference speed, scalability, and cost-efficiency we need to bring cutting-edge AI to the Kingdom," said Tareq Amin, CEO at HUMAIN. "Together, we're enabling a new wave of Saudi innovation—powered by the best open-source models and the infrastructure to scale them globally. We're proud to support OpenAI's leadership in open-source AI."
Built for full model capabilities
To make the most of OpenAI's new models, Groq delivers extended context and built-in tools like code execution and web search. Web search provides real-time, relevant information, while code execution enables reasoning and complex workflows. Groq's platform delivers these capabilities from day zero with a full 128K-token context length.
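To make the day-zero availability concrete, here is a minimal sketch of what a request against GroqCloud's OpenAI-compatible chat-completions interface might look like. The model ID `openai/gpt-oss-120b` and the tool-type names are assumptions for illustration, not confirmed identifiers; the sketch only constructs the request payload, since actually sending it would require a GroqCloud API key.

```python
import json

# Hypothetical request payload for gpt-oss-120B on GroqCloud.
# The model ID and tool-type names below are assumptions; consult
# Groq's documentation for the real identifiers before use.
payload = {
    "model": "openai/gpt-oss-120b",   # assumed model ID
    "messages": [
        {"role": "user",
         "content": "Summarize the main constraints on AI data center interconnects."}
    ],
    "tools": [
        {"type": "browser_search"},    # assumed server-side web-search tool
        {"type": "code_interpreter"},  # assumed server-side code-execution tool
    ],
    "max_tokens": 4096,                # well within the 128K-token context window
}

print(json.dumps(payload, indent=2))
```

Sending this payload would be an ordinary HTTP POST with a bearer token, since the shape mirrors the standard OpenAI chat-completions schema.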
Unmatched price-performance
Groq's purpose-built stack delivers the lowest cost per token for OpenAI's new models while maintaining speed and accuracy.
gpt-oss-120B is currently running at 500+ t/s and gpt-oss-20B is currently running at 1000+ t/s on GroqCloud.
Groq is offering OpenAI's latest open models at the following pricing:
- gpt-oss-120B: $0.15 / M input tokens and $0.75 / M output tokens
- gpt-oss-20B: $0.10 / M input tokens and $0.50 / M output tokens
Note: For a limited time, tool calls used with OpenAI's open models will not be charged. Learn more at groq.com/pricing.
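As a quick illustration of the pricing and throughput figures above, the sketch below estimates the cost and raw generation time of a single request. The token counts are made-up example inputs, and real latency would also include prompt processing and network time; this is a back-of-envelope estimate, not a billing tool.

```python
# Rough cost/latency estimate from the per-million-token prices and
# throughput figures quoted above. Token counts are illustrative only.

PRICES = {  # USD per million tokens: (input, output)
    "gpt-oss-120b": (0.15, 0.75),
    "gpt-oss-20b": (0.10, 0.50),
}
THROUGHPUT = {  # reported tokens/sec on GroqCloud
    "gpt-oss-120b": 500,
    "gpt-oss-20b": 1000,
}

def estimate(model, input_tokens, output_tokens):
    """Return (cost in USD, generation time in seconds) for one request."""
    in_price, out_price = PRICES[model]
    cost = (input_tokens * in_price + output_tokens * out_price) / 1_000_000
    seconds = output_tokens / THROUGHPUT[model]
    return cost, seconds

# Example: a 100K-token prompt with a 2K-token answer on gpt-oss-120B.
cost, secs = estimate("gpt-oss-120b", input_tokens=100_000, output_tokens=2_000)
print(f"${cost:.4f}, ~{secs:.1f}s")  # → $0.0165, ~4.0s
```

At these rates, even context-window-filling prompts cost fractions of a cent, which is the "lowest cost per token" claim made concrete.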
Global from day zero
Groq's global data center footprint across North America, Europe, and the Middle East ensures reliable, high-performance AI inference wherever developers operate. Through GroqCloud, OpenAI's open models are now available worldwide with minimal latency.
About Groq
Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.9 million developers trust Groq to build fast and scale smarter.
Contact: pr-media@groq.com
About HUMAIN
HUMAIN, a PIF company, is a global artificial intelligence company delivering full-stack AI capabilities across four core areas - next-generation data centers, hyper-performance infrastructure & cloud platforms, advanced AI Models, including the world's most advanced Arabic multimodal LLMs, and transformative AI Solutions that combine deep sector insight with real-world execution.
HUMAIN's end-to-end model serves both public and private sector organisations, unlocking exponential value across all industries, driving transformation and strengthening capabilities through human-AI synergies. With a growing portfolio of sector-specific AI products and a core mission to drive IP leadership and talent supremacy world-wide, HUMAIN is engineered for global competitiveness and national distinction.
WATCH Lightmatter's record-breaking Passage M1000 Powering the Next 1000x in AI Performance
The computational needs of Artificial Intelligence (AI) have pushed hardware to its absolute limits. One of the most significant bottlenecks in building a massive, scale-up AI system is interconnect I/O—specifically, the number of optical fibers you can connect to a GPU or a switch. Co-Packaged Optics (CPO) has emerged as a leading solution, but it still faces the physical constraint of fiber real estate.
The Passage 3D Silicon Photonics Engine scales from single chips to multi-die complexes all the way to wafer scale—the world’s fastest interconnect.

Explore the Passage M1000 Reference System, a groundbreaking platform built around the Passage M1000 3D Photonic Superchip. Discover how this revolutionary photonic interconnect technology is changing the game for Artificial Intelligence (AI) infrastructure by enabling massive scale-up bandwidth and radix, connecting GPUs, TPUs and data center switches in the largest AI model training clusters.
3D Photonic Interconnect for AI
The Passage™ M1000 3D Photonic Superchip reference platform demonstrates record-breaking 114 Tbps total bandwidth on a 4,000 mm² photonic interposer built for massive die complexes. The M1000 carries 256 optical fibers, supports 1.5 kW+ power delivery, and features the world’s first built-in solid-state optical circuit switching.
Unleash I/O anywhere within your chip complex. Eliminate energy-draining, latency-heavy network-on-chip journeys. Outperform shoreline-limited technologies like conventional CPO, NPO, and pluggables.
Lightmatter's groundbreaking 8X leap in bidirectional wavelengths per fiber paves the way for next-generation AI data centers
Lightmatter Achieves World-First 16-Wavelength Bidirectional Link on Single-Mode Optical Fiber
A massive, historic leap in AI infrastructure. This is what scaling AI should look like.
Lightmatter has achieved a world-first in optical communications by demonstrating a 16-wavelength bidirectional link on a single strand of standard optical fiber, along with unprecedented thermal performance and polarization insensitivity.
This landmark achievement is an 8X leap in bidirectional fiber bandwidth density, directly solving the bottlenecks that are limiting the scale and performance of today’s AI data centers. Powered by our Passage™ interconnect and Guide™ laser technologies, this breakthrough delivers an unprecedented 800 Gbps per fiber. It’s an architectural leap forward that enables hyperscalers to dramatically increase efficiency and scalability, paving the way for the next generation of AI models.
Lightmatter, the leader in photonic (super)computing, today announced a groundbreaking achievement in optical communications: a 16-wavelength bidirectional Dense Wavelength Division Multiplexing (DWDM) optical link operating on one strand of standard single-mode (SM) fiber. Powered by Lightmatter’s industry-leading Passage™ interconnect and Guide™ laser technologies, this breakthrough shatters previous limitations in fiber bandwidth density and spectral utilization, setting a new benchmark for high-performance, resilient data center interconnects.
With the rise of complex trillion-parameter Mixture of Experts (MoE) models, scaling AI workloads is increasingly bottlenecked by bandwidth and radix (I/O port count) limitations in data center infrastructure. Lightmatter’s Passage technology delivers an unprecedented 800 Gbps bidirectional bandwidth (400 Gbps transmit and 400 Gbps receive) per single-mode fiber for distances of several hundred meters or more. This achievement advances chip I/O design by simultaneously increasing both radix and bandwidth per fiber compared to existing co-packaged optics (CPO) solutions.
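The arithmetic behind these figures can be sketched briefly. The wavelength counts and the 400/400 Gbps split come from the announcement; the even per-wavelength division is an assumption for illustration.

```python
# Sketch of the bandwidth-density arithmetic from the announcement.
# Wavelength counts and the Tx/Rx split are from the release; the
# even per-lambda division is an illustrative assumption.

wavelengths = 16                 # "lambdas" on one single-mode fiber
prior_bidi_wavelengths = 2       # typical commercial BiDi baseline
tx_gbps, rx_gbps = 400, 400      # per-fiber transmit / receive

leap = wavelengths / prior_bidi_wavelengths     # the quoted 8X leap
total_gbps = tx_gbps + rx_gbps                  # 800 Gbps per fiber
per_lambda_gbps = total_gbps / wavelengths      # assumed even split

print(f"{leap:.0f}X wavelength leap, {total_gbps} Gbps per fiber, "
      f"{per_lambda_gbps:.0f} Gbps per lambda")
```

Going from two wavelengths to sixteen on one fiber is what yields the 8X bandwidth-density leap the headline refers to.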
While commercial bidirectional (BiDi) transmission on a single fiber has been limited mainly to two wavelengths, achieving 16 wavelengths (also referred to as “lambdas”) has historically required multiple or specialized fibers. This Lightmatter milestone addresses significant technical challenges related to managing complex wavelength-dependent propagation characteristics, power budget constraints, optical nonlinearity, and mitigating crosstalk and backscattering in a single fiber. Such innovations pave the way for the next major advances in AI model development, which demand more extensive and efficient high-bandwidth networking than exists today.
“Data centers are the new unit of compute in the AI era, with the next 1000X performance gain coming largely from ultra-fast photonic interconnects,” said Nicholas Harris, founder and CEO of Lightmatter. “Our 16-lambda bidirectional link is an architectural leap forward. Hyperscalers can achieve significantly higher bandwidth density with standard single-mode fiber, reducing both capital expenditure and operational complexity, while enabling higher ‘radix’—more connections per XPU or switch.”
“Lightmatter’s innovation arrives at a pivotal moment for hyperscale AI infrastructure. The ability to dramatically increase bandwidth density on existing single-mode fiber, coupled with the technology’s robust thermal performance, is a game-changer for data center scalability and efficiency. This solves one of the most pressing challenges in AI development and brings advanced Co-Packaged Optics a giant step closer to market,” said Alan Weckel, co-founder and analyst, 650 Group.
Lightmatter’s breakthrough incorporates a proprietary closed-loop digital stabilization system that actively compensates for thermal drift, ensuring continuous, low-error transmission over wide temperature fluctuations. In addition, architectural innovations make the Passage 3D CPO platform inherently polarization-insensitive, maintaining robust performance even when the fibers are handled or subjected to mechanical stress. Standard SM fiber, while offering immense bandwidth potential, does not inherently maintain light’s polarization state, unlike specialized and more costly polarization-maintaining (PM) fiber. By achieving polarization insensitivity, Lightmatter enables the use of cost-effective SM fiber for its industry-leading bidirectional DWDM technology.
This combination of unparalleled fiber bandwidth density, efficient spectral utilization, and robust performance makes Lightmatter’s Passage technology foundational for the industry’s transition from electrical to optical interconnects in AI data centers. It empowers customers to accelerate development of larger and more capable AI models with more powerful, efficient, and scalable data centers.
For more information about Passage technology, please visit https://lightmatter.co/
About Lightmatter
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.
Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co
Experience our breakthroughs.
• Watch the demo (video): https://bit.ly/3Hkt0kW
• Passage M1000 End to End System (video): https://bit.ly/46XaR72
• Read our Chief Scientist Darius Bunandar’s “Doubling Down: World’s First 16-λ Single Fiber Bidirectional Link for AI”: https://bit.ly/4fPUZWs
Photonics: The Next Leap in AI Infrastructure
Light-speed computing for the AI revolution
Just as fiber optics revolutionized global communications by replacing electrical signals with light, photonics is poised to transform AI infrastructure by fundamentally reimagining how we process and move data.
The Energy Crisis in AI
Today's AI systems consume staggering amounts of energy. Training large language models requires megawatt-hours of electricity, while inference at scale demands massive data centers that consume as much power as small cities. As AI capabilities expand exponentially, this energy consumption threatens to become unsustainable.
The bottleneck isn't just in computation—it's in the fundamental physics of moving electrons through silicon. Every bit transfer generates heat, every calculation requires power, and every nanosecond of latency compounds into massive inefficiencies at scale.
Enter Photonic Computing
Photonic computing replaces electrons with photons—particles of light that move at the ultimate speed limit of the universe. Unlike electrons, photons don't interact with each other, eliminating interference and reducing energy loss. They can carry multiple wavelengths simultaneously, enabling massive parallel processing that would be impossible with traditional electronics.
Key Advantages of Photonic AI
- Ultra-low latency: Light-speed processing reduces computation time by orders of magnitude
- Massive bandwidth: Wavelength division multiplexing enables parallel data streams
- Energy efficiency: Optical signaling sidesteps resistive heating, dramatically reducing power consumption
- Scalability: Linear scaling without the quadratic growth in complexity of electronic systems
The Fiber Optics Parallel
The transformation mirrors the fiber optics revolution of the 1980s and 1990s. Before fiber optics, long-distance communication relied on electrical signals that degraded over distance and required frequent amplification. The switch to light-based transmission enabled the global internet, high-definition video streaming, and instantaneous worldwide communication.
Similarly, photonic computing promises to break through the current limitations of electronic processing. Where traditional processors hit physical limits of heat dissipation and signal interference, photonic processors operate in an entirely different domain—one where the speed of light is the only constraint.
Real-World Applications
Photonic AI infrastructure would enable breakthrough applications across industries:
- Real-time language translation with zero perceptible latency
- Autonomous vehicles with instantaneous decision-making capabilities
- Climate modeling with unprecedented resolution and speed
- Medical diagnostics processing complex imaging in real-time
- Financial modeling with microsecond decision capabilities
Lightmatter: Pioneering the Transition

Lightmatter is leading the photonic AI revolution with a pragmatic, phased approach that bridges today's silicon infrastructure with tomorrow's all-optical systems. Rather than attempting to replace entire computing architectures overnight, the company is strategically transforming AI infrastructure piece by piece, starting where it matters most.
Phase 1: Light-Enabled Silicon
Lightmatter's breakthrough began with its photonic interconnect technology, which targets the critical bottleneck in modern AI systems. Its Passage™ interconnect replaces electrical wires between chips with light-based connections, instantly eliminating the bandwidth limitations and energy waste of traditional copper interconnects.
This approach is elegant in its simplicity: existing silicon chips continue to handle computation, but data moves between them at light speed with minimal energy loss. GPU clusters that once consumed massive amounts of power just moving data between processors now operate with dramatically improved efficiency and throughput.
Lightmatter's Interconnect Advantages
- 25x bandwidth increase over electrical interconnects
- 90% reduction in interconnect power consumption
- Drop-in compatibility with existing silicon processors
- Scalable architecture supporting thousands of connected devices
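Taken together, the quoted figures imply a large gain in bandwidth delivered per watt of interconnect power. The sketch below works through that arithmetic; the 100 W baseline is a hypothetical example value, not a Lightmatter specification.

```python
# Rough illustration of what the quoted interconnect figures imply
# for a cluster's data-movement budget. The 100 W baseline per link
# is a made-up example value, not a vendor spec.

baseline_link_power_w = 100.0    # hypothetical electrical link power
power_reduction = 0.90           # "90% reduction" from the bullet list
bandwidth_multiplier = 25        # "25x bandwidth increase"

optical_link_power_w = baseline_link_power_w * (1 - power_reduction)
# Bandwidth-per-watt gain: more bandwidth delivered at less power.
efficiency_gain = bandwidth_multiplier / (1 - power_reduction)

print(f"Optical link power: {optical_link_power_w:.0f} W "
      f"(~{efficiency_gain:.0f}x more bandwidth per watt)")
```

On these assumed numbers, a link that once drew 100 W would draw about 10 W while carrying 25x the traffic, i.e. roughly 250x more bandwidth per watt.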
Phase 2: Full Photonic Infrastructure
Lightmatter's long-term vision extends beyond interconnects to complete photonic computing systems. Its roadmap includes photonic neural network accelerators that perform matrix operations—the fundamental building blocks of AI—entirely in the optical domain. This represents the ultimate evolution: computation itself happening at light speed.
These full photonic systems promise to deliver the theoretical maximums of optical computing: processing speeds limited only by the speed of light, energy consumption approaching the fundamental physical limits, and massive parallel processing capabilities that dwarf today's electronic systems.
The Path Forward
Lightmatter's phased approach demonstrates the practical path to photonic AI dominance. By first solving the interconnect bottleneck with immediately deployable technology, the company is building the foundation for more revolutionary advances while delivering real value today.
The transition won't happen overnight, but the physics is compelling. As AI models continue to grow in complexity and energy demands become unsustainable, photonic infrastructure offers a path to exponentially more capable and efficient AI systems.
The Light-Speed Future
Just as fiber optics made the modern internet possible, photonic computing will unlock the next generation of AI—systems that think at the speed of light while consuming a fraction of today's energy.
The question isn't whether photonics will transform AI infrastructure—it's how quickly we can make the transition. In the race for artificial general intelligence, light itself may be our greatest ally.
Iambic Therapeutics taps Lambda for AI compute to develop the next generation of AI molecular property prediction for drug discovery
Iambic Therapeutics, a CNBC 2025 Disruptor 50 Company developing novel medicines using its AI-driven discovery and development platform, has partnered with Lambda to power the AI infrastructure behind its drug discovery platform.
Lambda, the GPU cloud company founded by AI engineers, today announced that Iambic Therapeutics, a clinical-stage life science and technology company developing novel medicines using its AI-driven discovery and development platform, has selected Lambda to provide an NVIDIA HGX B200 cluster to support the training of Enchant, its industry-leading model for molecular property prediction.
Iambic’s Enchant is a breakthrough multi-modal transformer model for predicting clinical and preclinical endpoints related to the drug discovery and development process. Enchant enables researchers to determine the viability of new drug molecules and make high-confidence predictions where data is most sparse, helping address the critical real-world challenge of understanding how novel drug candidates may affect a patient while still in the earliest stages of discovery. Iambic’s recently announced Enchant v2 provides accurate predictions for dozens of biological, physiochemical, pharmacokinetic, metabolic, safety, and other properties essential for clinical success.
“With the release of Enchant v2, we demonstrated both the model’s accuracy and scalability and we believe we can rapidly build on these gains through model scale alone,” said Matt Welborn, PhD, Iambic’s VP of Machine Learning. “We are expanding our successful relationship with Lambda to an NVIDIA HGX B200 cluster, which will accelerate this opportunity and the breadth of pre-clinical and clinical endpoints Enchant can predict, increasing the likelihood of a molecule’s success in human studies and the efficiency of drug development.”
Enchant’s high-confidence predictions enable multi-parameter optimization for the design of potentially more effective medicines, program prioritization, and the design of clinical trials for the potentially rapid translation of novel medicines. Iambic researchers also demonstrated that in some cases Enchant can be a better predictor of in vivo drug clearance than in vitro experiments – a key advancement as regulators look for drug developers to broaden their use of in silico testing. Today, Enchant is the leading model in the field based on performance benchmarks across diverse molecular property prediction tasks.
“We're thrilled to deepen our partnership with Iambic, a leader in AI-driven drug discovery,” said Robert Brooks IV, Founding Team and VP, Revenue at Lambda. “By leveraging Lambda's 1-Click Clusters for rapid testing and validation, Iambic was able to seamlessly scale to an NVIDIA HGX B200 cluster to accelerate breakthroughs in life sciences.”
To learn more about Lambda’s cloud offerings for AI training and inference, click here.
About Iambic’s AI-Driven Discovery Platform
The Iambic AI-driven platform was created to address the most challenging design problems in drug discovery, leveraging technology innovations such as Enchant (multimodal transformer model that predicts clinical and preclinical endpoints) and NeuralPLexer (best-in-class predictor of protein and protein-ligand structures). The integration of physics principles into the platform’s AI architectures improves data efficiency and allows molecular models to venture widely across the space of possible chemical structures. The platform enables identification of novel chemical modalities for engaging difficult-to-address biological targets, discovery of defined product profiles that optimize therapeutic window, and multiparameter optimization for highly differentiated development candidates. Through close integration of AI-generated molecular designs with automated chemical synthesis and experimental execution, Iambic completes design-make-test cycles on a weekly cadence.
About Iambic Therapeutics
Iambic is a clinical-stage life-science and technology company developing novel medicines using its AI-driven discovery and development platform. Based in San Diego and founded in 2020, Iambic has assembled a world-class team that unites pioneering AI experts and experienced drug hunters. The Iambic platform has demonstrated delivery of new drug candidates to human clinical trials with unprecedented speed and across multiple target classes and mechanisms of action. Iambic is advancing a pipeline of potential best-in-class and first-in-class clinical assets, both internally and in partnership, to address urgent unmet patient need. Learn more about the Iambic team, platform, pipeline, and partnerships at Iambic.ai
About Lambda
Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our goal is to become the #1 AI compute platform serving developers across the entire AI development lifecycle. We enable AI engineers to easily, securely and affordably build, test and deploy AI products at scale. Our product portfolio spans from on-prem GPU hardware to hosted GPUs in the cloud. Lambda’s mission is to create a world where access to computation is as effortless and ubiquitous as electricity.
Contacts
Media Contact
pr@lambdal.com
Lambda and Cologix launch the first NVIDIA HGX B200-Accelerated AI Clusters in Columbus data center
Supermicro’s AI-optimized hardware powers turnkey AI infrastructure for enterprise innovation in the Midwest
Cologix, a leading network-neutral interconnection and hyperscale edge data center company in North America, today announced its collaboration with Lambda, the AI Developer Cloud, to deploy NVIDIA HGX B200-accelerated 1-Click Clusters at Cologix’s COL4 Scalelogix℠ data center in Columbus, Ohio. Built on Supermicro’s high-performance, energy-efficient AI solutions, this is the first deployment of its kind in the region delivering enterprise-grade AI compute with simplified access for businesses across the Midwest.
As AI adoption accelerates, organizations are looking for fast, cost-effective ways to support large model training, fine-tuning and inference workloads. Following Lambda’s AI infrastructure launch in Chicago in 2024, this strategic collaboration brings Lambda’s NVIDIA GPU-accelerated 1-Click Clusters™ to Columbus, enabling regional enterprises to spin up high-performance compute infrastructure in seconds—no infrastructure management required.
“Columbus is a thriving hub for AI innovation, from manufacturing to healthcare,” said Robert Brooks IV, Founding Team & VP, Revenue at Lambda. “With Supermicro’s trusted systems and Cologix’s reliable infrastructure, we’re giving Lambda’s customers in the Midwest the fastest path to production-ready AI—and the added flexibility to integrate with hyperscaler environments.”
The launch increases the availability of Lambda’s popular 1-Click Clusters™, which are purpose-built for model training and inference at scale. Combined with Supermicro’s AI-optimized hardware and Cologix’s carrier-dense environment, this deployment brings frictionless access to next-gen AI performance all within a highly secure, compliant data center environment.
Built on NVIDIA Blackwell GPUs, 1-Click Clusters™ can be self-served or provisioned through Lambda’s flagship GPU Flexible Commitment (GFC) – an innovative compute consumption model that grants organizations access to Lambda’s entire Cloud portfolio and provides seamless transitions to the next generations of NVIDIA accelerated computing.
“Columbus is one of the fastest-growing digital corridors in the country and this launch brings coastal-level AI infrastructure into the region,” said Chris Heinrich, Chief Revenue Officer of Cologix. “Our collaboration with Lambda and Supermicro gives regional enterprises a powerful edge, combining low-latency access, dense interconnection and ready-to-deploy clusters, giving teams the ability to move faster and scale smarter.”
Cologix is currently the largest colocation and interconnection provider in the area with a portfolio of four data centers spanning a total of 500,000 square feet and 80 MW of power. All four of Cologix’s data centers in Columbus are interconnected with a diverse fiber ring. Additionally, Cologix has Ohio’s most comprehensive carrier hotel in its Columbus data centers as well as an interconnection ecosystem of 50+ unique network and cloud service providers, two public cloud onramps with access to Amazon Web Services® Direct Connect and Google Cloud Interconnect and the Ohio IX internet exchange.
“Supermicro’s collaboration with Lambda and Cologix delivers real-world impact,” said Charles Liang, president and CEO of Supermicro. “Our NVIDIA HGX B200-based hardware enables the highest performance AI workloads in a space- and energy-efficient footprint. Together, we’re bringing those benefits to businesses in the Midwest and beyond.”
Together, Lambda, Supermicro and Cologix are enabling enterprises in healthcare, finance, logistics, retail and manufacturing to accelerate their AI roadmaps without the heavy lift of managing infrastructure. This deployment is part of a broader trend of moving high-performance compute closer to where data is generated and used, and it reflects Cologix’s continued investment in digital infrastructure across North America.
Learn how to supercharge your AI stack with Lambda’s 1-Click Cluster solutions and tap into Cologix’s latest AI-ready data centers to accelerate your business’ growth at the digital edge.
About Cologix
Cologix powers digital infrastructure with 45+ hyperscale edge data centers and interconnection hubs across 12 North American markets, providing high-density, ultra-low latency solutions for cloud providers, carriers and enterprises. With AI-ready, industry-leading facilities, Cologix offers scalable, flexible and sustainable data center options to help its customers accelerate their business at the digital edge. Cologix provides extensive physical and virtual connections, including Access Marketplace, where customers gain fast, reliable and self-service provisioning for on-demand connectivity. For more information, visit Cologix or follow us on LinkedIn and X.
About Lambda
Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our GPU cloud and on-prem hardware enables AI developers to easily, securely and affordably build, test and deploy AI products at scale. Lambda’s mission is to accelerate human progress with ubiquitous and affordable access to computation. One person, one GPU.
About Supermicro
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.