Fast AI inference infrastructure leader Groq and HUMAIN launch OpenAI's new open models on day zero
Available worldwide with real-time performance, low cost, and local support in Saudi Arabia
Groq, the pioneer in fast inference, and HUMAIN, a PIF company and Saudi Arabia's leading AI services provider, today announced the immediate availability of OpenAI's two open models on GroqCloud. The launch delivers gpt-oss-120B and gpt-oss-20B with full 128K context, real-time responses, and integrated server-side tools live on Groq's optimized inference platform from day zero.
Groq has long supported OpenAI's open-source efforts, including large-scale deployment of Whisper. This launch builds on that foundation, bringing their newest models to production with global access and local support through HUMAIN.
"OpenAI is setting a new high performance standard in open source models," said Jonathan Ross, CEO of Groq. "Groq was built to run models like this, fast and affordably, so developers everywhere can use them from day zero. Working with HUMAIN strengthens local access and support in the Kingdom of Saudi Arabia, empowering developers in the region to build smarter and faster."
"Groq delivers the unmatched inference speed, scalability, and cost-efficiency we need to bring cutting-edge AI to the Kingdom," said Tareq Amin, CEO at HUMAIN. "Together, we're enabling a new wave of Saudi innovation—powered by the best open-source models and the infrastructure to scale them globally. We're proud to support OpenAI's leadership in open-source AI."
Built for full model capabilities
To make the most of OpenAI's new models, Groq delivers extended context and built-in server-side tools such as code execution and web search. Web search provides real-time, relevant information, while code execution enables reasoning over complex workflows. Groq's platform delivers these capabilities from day zero with the full 128K-token context length.
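As a rough sketch of what using these models might look like, GroqCloud exposes an OpenAI-compatible chat completions API; a request body can be built as plain JSON. The model identifier and parameter values below are illustrative assumptions, not confirmed by this announcement; check Groq's console for the exact names.

```python
import json

# Hypothetical request body for GroqCloud's OpenAI-compatible
# chat completions endpoint. The model identifier
# "openai/gpt-oss-120b" is an assumption for illustration.
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "Summarize today's top AI hardware news."}
    ],
    # The announcement notes a full 128K-token context window,
    # so long documents can be passed in the prompt directly.
    "max_tokens": 1024,
}

body = json.dumps(payload)
print(json.loads(body)["model"])  # → openai/gpt-oss-120b
```

Server-side tools such as web search and code execution would be attached to a request like this; their exact wire format is not specified in the announcement, so it is omitted here.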
Unmatched price-performance
Groq's purpose-built stack delivers the lowest cost per token for OpenAI's new models while maintaining speed and accuracy.
gpt-oss-120B is currently running at 500+ tokens per second (t/s) and gpt-oss-20B at 1,000+ t/s on GroqCloud.
Groq is offering OpenAI's latest open models at the following pricing:
- gpt-oss-120B: $0.15 / M input tokens and $0.75 / M output tokens
- gpt-oss-20B: $0.10 / M input tokens and $0.50 / M output tokens
Note: For a limited time, tool calls used with OpenAI's open models will not be charged. Learn more at groq.com/pricing.
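To put the published rates in concrete terms, here is a quick back-of-the-envelope calculation. The token counts are made-up example values, not figures from the announcement.

```python
# Cost of one hypothetical gpt-oss-120B request at the published
# rates: $0.15 per million input tokens, $0.75 per million output.
input_tokens = 4_000      # example prompt size (assumption)
output_tokens = 1_000     # example completion size (assumption)

cost = input_tokens / 1e6 * 0.15 + output_tokens / 1e6 * 0.75
print(f"${cost:.5f} per request")       # → $0.00135 per request

# At the quoted 500+ t/s for gpt-oss-120B, the 1,000-token
# completion streams back in roughly two seconds.
seconds = output_tokens / 500
print(f"~{seconds:.1f} s to generate")  # → ~2.0 s to generate
```

Even a fairly large request lands well under a cent at these rates, which is the price-performance argument the release is making.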
Global from day zero
Groq's global data center footprint across North America, Europe, and the Middle East ensures reliable, high-performance AI inference wherever developers operate. Through GroqCloud, OpenAI's open models are now available worldwide with minimal latency.
About Groq
Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.9 million developers trust Groq to build fast and scale smarter.
Contact: pr-media@groq.com
About HUMAIN
HUMAIN, a PIF company, is a global artificial intelligence company delivering full-stack AI capabilities across four core areas: next-generation data centers, hyper-performance infrastructure and cloud platforms, advanced AI models (including the world's most advanced Arabic multimodal LLMs), and transformative AI solutions that combine deep sector insight with real-world execution.
HUMAIN's end-to-end model serves both public and private sector organisations, unlocking exponential value across all industries, driving transformation and strengthening capabilities through human-AI synergies. With a growing portfolio of sector-specific AI products and a core mission to drive IP leadership and talent supremacy world-wide, HUMAIN is engineered for global competitiveness and national distinction.
WATCH: Lightmatter's record-breaking Passage M1000, powering the next 1000x in AI performance
ALSO READ:
Doubling Down: World’s First 16-λ Single Fiber Bidirectional Link for AI
The computational needs of Artificial Intelligence (AI) have pushed hardware to its absolute limits. One of the most significant bottlenecks in building a massive, scale-up AI system is interconnect I/O—specifically, the number of optical fibers you can connect to a GPU or a switch. Co-Packaged Optics (CPO) has emerged as a leading solution, but it still faces the physical constraint of fiber real estate.
The Passage 3D Silicon Photonics Engine scales from single chips to multi-die complexes all the way to wafer scale: the world's fastest interconnect.
Explore the Passage M1000 Reference System, a groundbreaking platform built around the Passage M1000 3D Photonic Superchip. Discover how this revolutionary photonic interconnect technology is changing the game for Artificial Intelligence (AI) infrastructure by enabling massive scale-up bandwidth and radix, connecting GPUs, TPUs and data center switches in the largest AI model training clusters.
3D Photonic Interconnect for AI
The Passage™ M1000 3D Photonic Superchip reference platform demonstrates record-breaking 114 Tbps of total bandwidth on a 4,000 mm² photonic interposer built for massive die complexes. The M1000 integrates 256 optical fibers, supports 1.5 kW+ power delivery, and includes the world's first built-in solid-state optical circuit switching.
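A rough sanity check on those figures: assuming the total bandwidth were spread evenly across the fibers (the release does not state this), the per-fiber rate works out as follows.

```python
# Stated M1000 figures: 114 Tbps total bandwidth, 256 optical fibers.
total_tbps = 114
fibers = 256

# Average per-fiber rate under an even-split assumption.
gbps_per_fiber = total_tbps * 1000 / fibers
print(round(gbps_per_fiber, 1))  # → 445.3
```

That average of roughly 445 Gbps per fiber is an inference from the two stated totals, not a published per-fiber spec.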
Unleash I/O anywhere within your chip complex. Eliminate energy-draining, latency-heavy network-on-chip journeys. Outperform shoreline-limited technologies like conventional CPO, NPO, and pluggables.
Lightmatter's groundbreaking 8X leap in bidirectional wavelengths per fiber paves the way for next-generation AI data centers
Lightmatter Achieves World-First 16-Wavelength Bidirectional Link on Single-Mode Optical Fiber
A historic leap in AI infrastructure. This is what scaling AI should look like.
Lightmatter has achieved a world-first in optical communications by demonstrating a 16-wavelength bidirectional link on a single strand of standard optical fiber, along with unprecedented thermal performance and polarization insensitivity.
This landmark achievement is an 8X leap in bidirectional fiber bandwidth density, directly solving the bottlenecks that are limiting the scale and performance of today’s AI data centers. Powered by our Passage™ interconnect and Guide™ laser technologies, this breakthrough delivers an unprecedented 800 Gbps per fiber. It’s an architectural leap forward that enables hyperscalers to dramatically increase efficiency and scalability, paving the way for the next generation of AI models.
Lightmatter, the leader in photonic (super)computing, today announced a groundbreaking achievement in optical communications: a 16-wavelength bidirectional Dense Wavelength Division Multiplexing (DWDM) optical link operating on one strand of standard single-mode (SM) fiber. Powered by Lightmatter’s industry-leading Passage™ interconnect and Guide™ laser technologies, this breakthrough shatters previous limitations in fiber bandwidth density and spectral utilization, setting a new benchmark for high-performance, resilient data center interconnects.
With the rise of complex trillion-parameter Mixture of Experts (MoE) models, scaling AI workloads is increasingly bottlenecked by bandwidth and radix (I/O port count) limitations in data center infrastructure. Lightmatter’s Passage technology delivers an unprecedented 800 Gbps bidirectional bandwidth (400 Gbps transmit and 400 Gbps receive) per single-mode fiber for distances of several hundred meters or more. This achievement advances chip I/O design by simultaneously increasing both radix and bandwidth per fiber compared to existing co-packaged optics (CPO) solutions.
While commercial bidirectional (BiDi) transmission on a single fiber has been limited mainly to two wavelengths, achieving 16 wavelengths (also referred to as “lambdas”) has historically required multiple or specialized fibers. This Lightmatter milestone addresses significant technical challenges related to managing complex wavelength-dependent propagation characteristics, power budget constraints, optical nonlinearity, and mitigating crosstalk and backscattering in a single fiber. Such innovations pave the way for the next major advances in AI model development, which demand more extensive and efficient high-bandwidth networking than exists today.
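The per-fiber numbers are straightforward to cross-check. The even 8/8 directional split and the per-lane rate below are inferred from the stated totals, not given explicitly in the announcement.

```python
# Stated figures: 16 wavelengths on one fiber, 800 Gbps total
# bidirectional bandwidth (400 Gbps each direction).
wavelengths_total = 16
gbps_per_direction = 400

# Assuming the lambdas split evenly, 8 carry traffic each way.
lanes_per_direction = wavelengths_total // 2
gbps_per_lane = gbps_per_direction / lanes_per_direction
print(f"{gbps_per_lane} Gbps per wavelength")  # → 50.0 Gbps per wavelength

# Versus the two-wavelength BiDi links common commercially,
# that is a 16/2 jump in wavelengths per fiber.
leap = wavelengths_total / 2
print(f"{leap:.0f}x wavelengths per fiber")    # → 8x wavelengths per fiber
```

The 8x figure matches the "8X leap in bidirectional fiber bandwidth density" claimed above, since the per-wavelength rate stays constant while the wavelength count grows.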
“Data centers are the new unit of compute in the AI era, with the next 1000X performance gain coming largely from ultra-fast photonic interconnects,” said Nicholas Harris, founder and CEO of Lightmatter. “Our 16-lambda bidirectional link is an architectural leap forward. Hyperscalers can achieve significantly higher bandwidth density with standard single-mode fiber, reducing both capital expenditure and operational complexity, while enabling higher ‘radix’—more connections per XPU or switch.”
“Lightmatter’s innovation arrives at a pivotal moment for hyperscale AI infrastructure. The ability to dramatically increase bandwidth density on existing single-mode fiber, coupled with the technology’s robust thermal performance, is a game-changer for data center scalability and efficiency. This solves one of the most pressing challenges in AI development and brings advanced Co-Packaged Optics a giant step closer to market,” said Alan Weckel, co-founder and analyst, 650 Group.
Lightmatter’s breakthrough incorporates a proprietary closed-loop digital stabilization system that actively compensates for thermal drift, ensuring continuous, low-error transmission over wide temperature fluctuations. In addition, architectural innovations make the Passage 3D CPO platform inherently polarization-insensitive, maintaining robust performance even when the fibers are being handled or subject to mechanical stress. Standard SM fiber, while offering immense bandwidth potential, does not inherently maintain light’s polarization state, unlike specialized and more costly polarization-maintaining (PM) fiber. By achieving polarization insensitivity, Lightmatter enables the use of cost-effective SM fiber for its industry-leading bidirectional DWDM technology.
This combination of unparalleled fiber bandwidth density, efficient spectral utilization, and robust performance makes Lightmatter’s Passage technology foundational for the industry’s transition from electrical to optical interconnects in AI data centers. It empowers customers to accelerate development of larger and more capable AI models with more powerful, efficient, and scalable data centers.
For more information about Passage technology, please visit https://lightmatter.co/
About Lightmatter
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.
Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co
Experience our breakthroughs.
• Watch the demo (video): https://bit.ly/3Hkt0kW
• Passage M1000 End to End System (video): https://bit.ly/46XaR72
Read our Chief Scientist Darius Bunandar’s “Doubling Down: World’s First 16-λ Single Fiber Bidirectional Link for AI”: https://bit.ly/4fPUZWs
Photonics: The Next Leap in AI Infrastructure
Light-speed computing for the AI revolution
Just as fiber optics revolutionized global communications by replacing electrical signals with light, photonics is poised to transform AI infrastructure by fundamentally reimagining how we process and move data.
The Energy Crisis in AI
Today's AI systems consume staggering amounts of energy. Training large language models requires megawatt-hours of electricity, while inference at scale demands massive data centers that consume as much power as small cities. As AI capabilities expand exponentially, this energy consumption threatens to become unsustainable.
The bottleneck isn't just in computation—it's in the fundamental physics of moving electrons through silicon. Every bit transfer generates heat, every calculation requires power, and every nanosecond of latency compounds into massive inefficiencies at scale.
Enter Photonic Computing
Photonic computing replaces electrons with photons—particles of light that move at the ultimate speed limit of the universe. Unlike electrons, photons don't interact with each other, eliminating interference and reducing energy loss. They can carry multiple wavelengths simultaneously, enabling massive parallel processing that would be impossible with traditional electronics.
Key Advantages of Photonic AI
- Ultra-low latency: Light-speed processing reduces computation time by orders of magnitude
- Massive bandwidth: Wavelength division multiplexing enables parallel data streams
- Energy efficiency: Photons generate virtually no heat, dramatically reducing power consumption
- Scalability: Linear scaling without the quadratic growth in complexity of electronic systems
The Fiber Optics Parallel
The transformation mirrors the fiber optics revolution of the 1980s and 1990s. Before fiber optics, long-distance communication relied on electrical signals that degraded over distance and required frequent amplification. The switch to light-based transmission enabled the global internet, high-definition video streaming, and instantaneous worldwide communication.
Similarly, photonic computing promises to break through the current limitations of electronic processing. Where traditional processors hit physical limits of heat dissipation and signal interference, photonic processors operate in an entirely different domain—one where the speed of light is the only constraint.
Real-World Applications
Photonic AI infrastructure would enable breakthrough applications across industries:
- Real-time language translation with zero perceptible latency
- Autonomous vehicles with instantaneous decision-making capabilities
- Climate modeling with unprecedented resolution and speed
- Medical diagnostics processing complex imaging in real-time
- Financial modeling with microsecond decision capabilities
Lightmatter: Pioneering the Transition
Lightmatter is leading the photonic AI revolution with a pragmatic, phased approach that bridges today's silicon infrastructure with tomorrow's all-optical systems. Rather than attempting to replace entire computing architectures overnight, the company is strategically transforming AI infrastructure piece by piece, starting where it matters most.
Phase 1: Light-Enabled Silicon
Lightmatter's breakthrough began with its photonic interconnect technology, targeting the critical bottleneck in modern AI systems. Its Passage™ interconnect replaces electrical wires between chips with light-based connections, eliminating the bandwidth limitations and energy waste of traditional copper interconnects.
This approach is genius in its simplicity: existing silicon chips continue to handle computation, but data moves between them at light speed with minimal energy loss. GPU clusters that once consumed massive amounts of power just moving data between processors now operate with dramatically improved efficiency and throughput.
Lightmatter's Interconnect Advantages
- 25x bandwidth increase over electrical interconnects
- 90% reduction in interconnect power consumption
- Drop-in compatibility with existing silicon processors
- Scalable architecture supporting thousands of connected devices
Phase 2: Full Photonic Infrastructure
Lightmatter's long-term vision extends beyond interconnects to complete photonic computing systems. Its roadmap includes photonic neural network accelerators that perform matrix operations, the fundamental building blocks of AI, entirely in the optical domain. This represents the ultimate evolution: computation itself happening at light speed.
These full photonic systems promise to deliver the theoretical maximums of optical computing: processing speeds limited only by the speed of light, energy consumption approaching the fundamental physical limits, and massive parallel processing capabilities that dwarf today's electronic systems.
The Path Forward
Lightmatter's phased approach demonstrates a practical path to photonic AI dominance. By first solving the interconnect bottleneck with immediately deployable technology, the company is building the foundation for more revolutionary advances while delivering real value today.
The transition won't happen overnight, but the physics is compelling. As AI models continue to grow in complexity and energy demands become unsustainable, photonic infrastructure offers a path to exponentially more capable and efficient AI systems.
The Light-Speed Future
Just as fiber optics made the modern internet possible, photonic computing will unlock the next generation of AI—systems that think at the speed of light while consuming a fraction of today's energy.
The question isn't whether photonics will transform AI infrastructure—it's how quickly we can make the transition. In the race for artificial general intelligence, light itself may be our greatest ally.
Iambic Therapeutics taps Lambda for AI compute to develop the next generation of AI molecular property prediction for drug discovery
Iambic Therapeutics, a CNBC 2025 Disruptor 50 company developing novel medicines using its AI-driven discovery and development platform, has partnered with Lambda to power its platform's AI infrastructure for AI-driven drug discovery.
Lambda, the GPU cloud company founded by AI engineers, today announced that Iambic Therapeutics, a clinical-stage life science and technology company developing novel medicines using its AI-driven discovery and development platform, has selected Lambda to provide an NVIDIA HGX B200 cluster to support the training of Enchant, its industry-leading model for molecular property prediction.
Iambic’s Enchant is a breakthrough multi-modal transformer model for predicting clinical and preclinical endpoints related to the drug discovery and development process. Enchant enables researchers to determine the viability of new drug molecules and make high-confidence predictions where data is most sparse, helping address the critical real-world challenge of understanding how novel drug candidates may affect a patient while still in the earliest stages of discovery. Iambic’s recently announced Enchant v2 provides accurate predictions for dozens of biological, physicochemical, pharmacokinetic, metabolic, safety, and other properties essential for clinical success.
“With the release of Enchant v2, we demonstrated both the model’s accuracy and scalability, and we believe we can rapidly build on these gains through model scale alone,” said Matt Welborn, PhD, Iambic’s VP of Machine Learning. “We are expanding our successful relationship with Lambda to an NVIDIA HGX B200 cluster, which will accelerate this opportunity and the breadth of pre-clinical and clinical endpoints Enchant can predict, increasing the likelihood of a molecule's success in human studies and the efficiency of drug development.”
Enchant’s high-confidence predictions enable multi-parameter optimization for the design of potentially more effective medicines, program prioritization, and the design of clinical trials for the potentially rapid translation of novel medicines. Iambic researchers also demonstrated that in some cases Enchant can be a better predictor of in vivo drug clearance than in vitro experiments – a key advancement as regulators look for drug developers to broaden their use of in silico testing. Today, Enchant is the leading model in the field based on performance benchmarks across diverse molecular property prediction tasks.
“We're thrilled to deepen our partnership with Iambic, a leader in AI-driven drug discovery,” said Robert Brooks IV, Founding Team and VP, Revenue at Lambda. “By leveraging Lambda's 1-Click Clusters for rapid testing and validation, Iambic was able to seamlessly scale to an NVIDIA HGX B200 cluster to accelerate breakthroughs in life sciences.”
To learn more about Lambda’s cloud offerings for AI training and inference, visit Lambda’s website.
About Iambic’s AI-Driven Discovery Platform
The Iambic AI-driven platform was created to address the most challenging design problems in drug discovery, leveraging technology innovations such as Enchant (multimodal transformer model that predicts clinical and preclinical endpoints) and NeuralPLexer (best-in-class predictor of protein and protein-ligand structures). The integration of physics principles into the platform’s AI architectures improves data efficiency and allows molecular models to venture widely across the space of possible chemical structures. The platform enables identification of novel chemical modalities for engaging difficult-to-address biological targets, discovery of defined product profiles that optimize therapeutic window, and multiparameter optimization for highly differentiated development candidates. Through close integration of AI-generated molecular designs with automated chemical synthesis and experimental execution, Iambic completes design-make-test cycles on a weekly cadence.
About Iambic Therapeutics
Iambic is a clinical-stage life-science and technology company developing novel medicines using its AI-driven discovery and development platform. Based in San Diego and founded in 2020, Iambic has assembled a world-class team that unites pioneering AI experts and experienced drug hunters. The Iambic platform has demonstrated delivery of new drug candidates to human clinical trials with unprecedented speed and across multiple target classes and mechanisms of action. Iambic is advancing a pipeline of potential best-in-class and first-in-class clinical assets, both internally and in partnership, to address urgent unmet patient need. Learn more about the Iambic team, platform, pipeline, and partnerships at Iambic.ai
About Lambda
Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our goal is to become the #1 AI compute platform serving developers across the entire AI development lifecycle. We enable AI engineers to easily, securely and affordably build, test and deploy AI products at scale. Our product portfolio spans from on-prem GPU hardware to hosted GPUs in the cloud. Lambda’s mission is to create a world where access to computation is as effortless and ubiquitous as electricity.
Contacts
Media Contact
pr@lambdal.com
Lambda and Cologix launch the first NVIDIA HGX B200-Accelerated AI Clusters in Columbus data center
Supermicro’s AI-optimized hardware powers turnkey AI infrastructure for enterprise innovation in the Midwest
Cologix, a leading network-neutral interconnection and hyperscale edge data center company in North America, today announced its collaboration with Lambda, the AI Developer Cloud, to deploy NVIDIA HGX B200-accelerated 1-Click Clusters at Cologix’s COL4 Scalelogix℠ data center in Columbus, Ohio. Built on Supermicro’s high-performance, energy-efficient AI solutions, this is the first deployment of its kind in the region, delivering enterprise-grade AI compute with simplified access for businesses across the Midwest.
As AI adoption accelerates, organizations are looking for fast, cost-effective ways to support large model training, fine-tuning and inference workloads. After the companies stood up AI infrastructure in Chicago in 2024, this strategic collaboration brings Lambda’s NVIDIA GPU-accelerated 1-Click Clusters™ to Columbus, enabling regional enterprises to spin up high-performance compute infrastructure in seconds, with no infrastructure management required.
“Columbus is a thriving hub for AI innovation, from manufacturing to healthcare,” said Robert Brooks IV, Founding Team & VP, Revenue at Lambda. “With Supermicro’s trusted systems and Cologix’s reliable infrastructure, we’re giving Lambda’s customers in the Midwest the fastest path to production-ready AI—and the added flexibility to integrate with hyperscaler environments.”
The launch increases the availability of Lambda’s popular 1-Click Clusters™, which are purpose-built for model training and inference at scale. Combined with Supermicro’s AI-optimized hardware and Cologix’s carrier-dense environment, this deployment brings frictionless access to next-gen AI performance all within a highly secure, compliant data center environment.
Built on NVIDIA Blackwell GPUs, 1-Click Clusters™ can be self-served or provisioned through Lambda’s flagship GPU Flexible Commitment (GFC), an innovative compute consumption model that grants organizations access to Lambda’s entire cloud portfolio and provides seamless transitions to the next generations of NVIDIA accelerated computing.
“Columbus is one of the fastest-growing digital corridors in the country and this launch brings coastal-level AI infrastructure into the region,” said Chris Heinrich, Chief Revenue Officer of Cologix. “Our collaboration with Lambda and Supermicro gives regional enterprises a powerful edge, combining low-latency access, dense interconnection and ready-to-deploy clusters, giving teams the ability to move faster and scale smarter.”
Cologix is currently the largest colocation and interconnection provider in the area with a portfolio of four data centers spanning a total of 500,000 square feet and 80 MW of power. All four of Cologix’s data centers in Columbus are interconnected with a diverse fiber ring. Additionally, Cologix has Ohio’s most comprehensive carrier hotel in its Columbus data centers as well as an interconnection ecosystem of 50+ unique network and cloud service providers, two public cloud onramps with access to Amazon Web Services® Direct Connect and Google Cloud Interconnect and the Ohio IX internet exchange.
“Supermicro’s collaboration with Lambda and Cologix delivers real-world impact,” said Charles Liang, president and CEO of Supermicro. “Our NVIDIA HGX B200-based hardware enables the highest performance AI workloads in a space- and energy-efficient footprint. Together, we’re bringing those benefits to businesses in the Midwest and beyond.”
Together, Lambda, Supermicro and Cologix are enabling enterprises in healthcare, finance, logistics, retail and manufacturing to accelerate their AI roadmaps without the heavy lift of managing infrastructure. This deployment is part of a broader trend of moving high-performance compute closer to where data is generated and used and reflects Cologix’s continued investment in digital infrastructure across North America.
Learn how to supercharge your AI stack with Lambda’s 1-Click Cluster solutions and tap into Cologix’s latest AI-ready data centers to accelerate your business’ growth at the digital edge.
About Cologix
Cologix powers digital infrastructure with 45+ hyperscale edge data centers and interconnection hubs across 12 North American markets, providing high-density, ultra-low latency solutions for cloud providers, carriers and enterprises. With AI-ready, industry-leading facilities, Cologix offers scalable, flexible and sustainable data center options to help its customers accelerate their business at the digital edge. Cologix provides extensive physical and virtual connections, including Access Marketplace, where customers gain fast, reliable and self-service provisioning for on-demand connectivity. For more information, visit Cologix or follow us on LinkedIn and X.
About Lambda
Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our GPU cloud and on-prem hardware enables AI developers to easily, securely and affordably build, test and deploy AI products at scale. Lambda’s mission is to accelerate human progress with ubiquitous and affordable access to computation. One person, one GPU.
About Supermicro
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
Nixxy (NASDAQ: NIXX) acquires Leadnova.ai platform from NexGenAI and reports $5.2 million in May revenue
Nixxy (NASDAQ: NIXX), an AI-powered data and communications infrastructure company, today announced the successful acquisition of the Leadnova.ai platform assets from the NexGenAI suite of companies. Nixxy also reported that it generated approximately $5.2 million in unaudited gross revenue for the month of May 2025, up from approximately $1.4 million in April 2025.
Leadnova.ai Platform Asset Acquisition
Leadnova.ai is a SaaS-based solution designed to support business development through data delivery, sales enablement, outreach automation, and engagement analytics. As part of the asset acquisition, Nixxy will take ownership of the Leadnova.ai domain, front-end and back-end software, automation engines, structured data repositories, and API infrastructure.
The integration of Leadnova.ai into Nixxy's AuralinkAI platform is expected to accelerate Nixxy's AI-enhanced telecom strategy and enable more intelligent, performance-driven enterprise communication services.
"Our strategy is focused on combining infrastructure with automation and AI," said Mike Schmidt, CEO of Nixxy. "Leadnova.ai adds powerful capabilities that align perfectly with our vision to deliver smarter, higher-margin communications services at scale."
May Revenue Performance and Telecom Growth Trajectory
Nixxy generated gross revenues (unaudited) of approximately $5.2 million in May 2025, representing continued traction in its growth and expansion strategy. With additional customer agreements and infrastructure now coming online, Nixxy continues to target a $10 million monthly revenue run rate by August 2025, contingent on continued execution and successful onboarding of new partners.
"We're pleased with the early momentum and the diversity of revenue sources starting to emerge," added Schmidt. "We expect ongoing growth across voice, messaging, and enterprise automation as we scale our network and integrate our AI capabilities."
Further updates will be provided as Nixxy progresses through key execution milestones.
Nixxy Announces Registered Direct Offering
Nixxy today announced the pricing of a registered direct offering for the sale and issuance of up to 846,667 shares of the Company's common stock to a small group of accredited investors at a price per share of $1.50.
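At the stated terms, maximum gross proceeds before fees work out as follows. Fees and net proceeds are not disclosed here, so only the headline figure is computed.

```python
# Registered direct offering terms from the release:
# up to 846,667 shares at $1.50 per share.
shares = 846_667
price_per_share = 1.50

gross_proceeds = shares * price_per_share
print(f"${gross_proceeds:,.2f}")  # → $1,270,000.50
```

That is roughly $1.27 million in maximum gross proceeds if the offering is fully subscribed.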
The shares of common stock in the registered direct offering (but excluding the securities issued in the planned private placement) were offered pursuant to a "shelf" registration statement on Form S-3 (File No. 333-267470) initially filed with the Securities and Exchange Commission (the "SEC") on September 16, 2022, and declared effective by the SEC on September 30, 2022. The offering of the common stock in the registered direct offering was made only by means of a prospectus, including a prospectus supplement, forming a part of the effective registration statement. A final prospectus supplement and accompanying prospectus describing the terms of the proposed offering was filed with the SEC on May 29, 2025 and is available on the SEC's website located at http://www.sec.gov. Electronic copies of the final prospectus supplement and the accompanying prospectus may be obtained, when available, by contacting the Company at 1178 Broadway, 3rd Floor, New York, NY 10001, or by telephone at (877) 708-8878.
About Nixxy, Inc.
Nixxy, Inc. (NASDAQ:NIXX) is a next-generation communications and infrastructure company transforming the telecom landscape through AI-powered platforms, intelligent routing, and enterprise-grade messaging solutions. Anchored by its AuralinkAI platform, Nixxy integrates advanced automation, data analytics, and scalable infrastructure to deliver high-performance voice and messaging services globally. With a focus on operational efficiency, strategic acquisitions, and platform scalability, Nixxy is building a robust telecom network optimized for volume, intelligence, and margin. Nixxy's hybrid approach, combining infrastructure ownership with AI-enhanced service delivery, differentiates it in modern telecom innovation, serving both wholesale and enterprise clients.
Filings and press releases can be found at http://www.nixxy.com/investor-relations.
Contact Information
Investor Contact: Nixxy, Inc.
Investor Relations Email: IR@nixxy.com
Phone: (877) 708-8868
Groq’s CEO Jonathan Ross on why AI’s next big shift isn’t about Nvidia
"Right now, we're in the printing press era of AI, the very beginning," says Groq Founder & CEO Jonathan Ross
The race to compete with AI chip darling Nvidia (NVDA) is well underway. Enter Groq. The company makes what it calls language processing units (LPUs), chips designed to run large language models faster and more efficiently, in contrast to Nvidia's GPUs, which primarily target model training. Groq's last capital raise was in August 2024, when it raised $640 million from the likes of BlackRock and Cisco. The company's valuation at the time stood at $2.8 billion, a fraction of Nvidia's more than $3 trillion market cap. It currently clocks in at $3.5 billion, according to Yahoo Finance private markets data.
At the helm of Groq is founder and CEO Jonathan Ross. While at Google, Ross designed the custom chips that the tech giant would go on to train its AI models on. Yahoo Finance Executive Editor Brian Sozzi sits down on the Opening Bid podcast with Ross to discuss his future plans for Groq. Ross is fresh off a trip with other key tech executives to Saudi Arabia, joining President Donald Trump in forging a deeper tech relationship with the country. Ross begins by sharing with Sozzi the conversations he had on the ground in Saudi Arabia.
Groq Founder CEO Jonathan Ross Says Speed to Deployment Sets Its AI Inference Infrastructure Apart
Groq CEO Jonathan Ross explains how the company differentiates itself from the competition in an interview with Bloomberg's Caroline Hyde. Ross shares how Groq's rapid growth trajectory enabled its AI-inference platform to close an exclusive partnership with Bell AI Fabric, Canada's largest sovereign AI infrastructure project. Speed matters in how you build, deploy, and go to market, he argues, and it has made Groq one of the fastest-growing AI companies.
Conduit Raises $36 Million Series A to Scale Use of Stablecoins for Cross-Border Payments
Conduit transaction volumes surged 16x in 2024, surpassing $10 billion in annualized payment volume. The company will use the new funding to expand its geographic reach and broaden the range of fiat and digital currencies supported through its real-time global payment rails.
Conduit, a leading cross-border payments platform powered by stablecoins, announced today it closed a $36 million Series A funding round. The round was co-led by Dragonfly and Altos Ventures, with participation from Sound Ventures, Commerce Ventures, DCG, Circle Ventures (the venture arm of USDC stablecoin issuer Circle), and existing investors Helios Digital Ventures and Portage Ventures.
Conduit’s cross-border payment network seamlessly integrates stablecoins, USD, and local currencies, providing businesses with a faster, cheaper, and more reliable alternative to the legacy SWIFT system. Already connected to multiple local banks across North America, Latin America, Europe, Africa, and Asia, Conduit will use the capital to fuel expansion into additional markets and support an even broader range of traditional and digital currencies through its real-time payment rails.
This round of funding comes on the back of Conduit’s exceptional growth, with transaction volumes growing 16x through its platform between 2023 and 2024. To date, Conduit has saved clients over 60,000 hours in settlement times and generated fee savings worth over $55 million. The platform bridges crypto-native infrastructure with traditional finance, offering nearly instant, programmable global transactions with integrated AML, sanctions screening, and transaction monitoring.
Clients choose Conduit for:
- Speed and Efficiency: Unlike payment platforms that rely on slow and disjointed networks of correspondent banks, Conduit has direct partnerships with two dozen banks across the world, enabling transactions to settle in seconds rather than days.
- Broad Geographic Coverage: Conduit natively supports a diverse range of currencies and payment methods, including highly inflationary local currencies in Latin America, Africa, and Asia.
- Deep Liquidity: Conduit’s robust network of institutional-grade FX providers ensures large transactions can be processed seamlessly without liquidity constraints.
"This fresh capital injection will enable us to accelerate our mission to build the next generation global payments infrastructure, to promote fairer economic opportunities across the world," said Kirill Gertman, Conduit CEO. "Traditional cross-border payment systems do not meet the demands of modern businesses. Conduit’s platform seamlessly bridges the gap between traditional banking and stablecoin technology, offering unparalleled speed, affordability, transparency and reliability."
Conduit’s platform enables nearly instant global transfers across multiple payment rails, including USD-denominated payment networks (SWIFT, ACH, FedWire) and local payment systems throughout Europe, the UK, and countries such as China, Hong Kong, Mexico, Brazil, Colombia, Nigeria, and Kenya, among others. Businesses in these regions often face restricted access to USD, lack of SWIFT connectivity, limited interoperability between stablecoins and fiat currencies, slow settlement times, high fees, and complex regulatory requirements. While stablecoins can have a significant impact on how businesses can manage their treasuries, most market participants still expect invoices to be settled in fiat currencies, creating a need for seamless interoperability between fiat and digital currencies. Conduit enables clients in those jurisdictions to transition between stablecoins and local currencies in real-time to more efficiently settle commercial invoices.
As part of this investment, Dragonfly Capital’s Rob Hadick will join the Conduit board. "We're thrilled to lead this investment round and support Kirill and his team as they reimagine how money moves across borders. With billions of annual transaction volume already flowing through Conduit’s platform, it has proven there’s a better way to move money globally and that stablecoins are the future of cross-border payments," Hadick stated. "What impressed us most was not just their innovative technology, but their remarkable traction and clear product-market fit. By addressing the real pain points businesses face with international transactions, particularly in emerging markets, Conduit has positioned itself as a critical infrastructure provider for the global economy."
Founded in 2021, Conduit currently has 57 employees and serves more than 100 clients, with 105% year-over-year client growth. The company plans to expand its product offering into Asia and strengthen its footprint in Mexico and other geographies.
About Conduit
Conduit is a next-generation payment network for businesses that move money across borders. We provide fast, reliable global payments by combining instant local payment rails with the efficiency of stablecoins. With a single API, Conduit connects banks, local payment rails, and blockchains to create a resilient network spanning key markets worldwide — including deep connectivity across Latin America, Africa and Asia. To learn more visit https://conduitpay.com/.
Contacts
Julie Bishop
julie@walkercomms.com