Aderant acquires HerculesAI legal technology assets, accelerating AI capabilities in its work-to-cash solution

Aderant + HerculesAI technology = intelligent automation for the world’s most innovative law firms. The acquisition of HerculesAI’s legal technology assets supercharges Aderant’s already industry-leading legal technology suite.

For more than a decade, HerculesAI has been laser-focused on providing AI solutions that eliminate manual work, enable customers to scale beyond human capabilities, and deliver real ROI.

HerculesAI CEO Alex Babin said, “I am proud of the work that our team has put into building our legal technology solutions and look forward to seeing them incorporated into Aderant’s suite, where they will be used by hundreds of thousands of people worldwide.”

Aderant, a leading global provider of business management solutions for law firms, today announced that it has signed a definitive agreement to acquire the legal technology assets from HerculesAI, a pioneer in AI-driven billing compliance and decision intelligence. This strategic move marks a significant leap forward in Aderant’s mission to deliver a fully automated, insight-rich, and agile work-to-cash solution for the legal industry.

By embedding HerculesAI’s advanced machine learning and decision intelligence into Aderant’s solutions—and supercharging MADDI, Aderant’s AI-powered virtual associate—Aderant is unlocking unprecedented levels of automation, insight, and agility for law firms around the globe. This powerful integration doesn’t just streamline workflows; it proactively prevents revenue leakage, boosts realization rates, and redefines compliance as a strategic advantage—all driven by AI that understands the context, intent, and urgency behind every decision.

“This acquisition is a game-changer for our clients,” said Chris Cartrett, President & CEO of Aderant. “By integrating HerculesAI’s advanced compliance engine with our industry-leading work-to-cash platform, we’re not just automating workflows—we’re enabling intelligent automation that drives measurable profitability. Law firms will gain unprecedented precision, speed, and confidence in their operations. Beyond the technology, we’re equally excited to welcome the entire HerculesAI legal technology team to Aderant. AI-centered development is at the core of our strategy, with many successful product releases in recent years. This acquisition will only enhance the AI engineering and data science talent within Aderant. That’s not just a win—it’s a strategic leap forward.”

Alex Babin, founder and CEO of HerculesAI, added, “Aderant is a leading force in legal technology, and each of HerculesAI’s legal products fits naturally into its portfolio. This move strengthens Aderant’s end-to-end work-to-cash offering and further transforms compliance from a bottleneck into a business accelerator. Just as important as product synergy is the cultural alignment between our teams: a shared commitment to solving real problems for customers. That alignment goes beyond technology and sets the foundation for long-term impact.”

Aderant has released nine AI-driven products into the market in the past two years. The integration of HerculesAI technology into Aderant’s ecosystem will enable even greater real-time enforcement of outside counsel guidelines (OCGs), internal billing policies, and historical patterns—delivering audit-ready analytics and one-click resolution of billing issues. Combined with Aderant’s AR Automation and Cloud solutions, this creates a seamless, end-to-end work-to-cash experience.

Aderant will be exhibiting at ILTACON 2025, the conference of the International Legal Technology Association, on August 11-14 at the Gaylord National Resort and Convention Center in National Harbor, Maryland (Aderant booth #501 & HerculesAI booth #1209).

For more information on Aderant, visit aderant.com.

About Aderant®

Aderant is dedicated to helping law firms run a better business. As a leading global provider of business management and practice-of-law solutions, the world’s best firms rely on Aderant to keep their businesses moving forward and inspire innovation. At Aderant, the “A” is more than just a letter. It represents how we fulfill our foundational purpose, serving our clients. Aderant operates as a business unit of Roper Technologies (Nasdaq: ROP), a constituent of the S&P 500 and Fortune 1000. The company is headquartered in Atlanta, Georgia, and has several other offices across North America, Europe, and Asia-Pacific. For more information, visit Aderant.com, email info@aderant.com, or follow the company on LinkedIn.

About HerculesAI

HerculesAI converts rules-heavy, document-bound workflows into real-time, auditable data flows for the world’s most regulated enterprises. Its AI agents capture data from any source, map it to company or industry standards, and validate every field against business rules—eliminating manual entry, cutting administration time by up to 70 percent, and ensuring continuous compliance. SOC 2 (Type II)-certified, the platform can be deployed in the customer’s cloud or on-premises. Founded in 2014, HerculesAI's headquarters are in Campbell, California, with global offices in Europe, the United Kingdom and Canada. Learn more at hercules.ai.

Contacts

Media Contacts 

Will Ayers
Manager, Communications
Will.ayers@aderant.com 

Tam Ellis
Vice President of Marketing, HerculesAI
216-208-4050 

Christy Burke, Burke & Company PR
917-623-5096
cburke@burke-company.com

AlphaSense Launches Autonomous AI Agent Interviewer and Channel Checks to deliver real-time market signals across every sector of the economy

AlphaSense just changed the game for market intelligence with the launch of its AI Agent Interviewer and Channel Checks, giving you real-time market signals across every sector. Think demand shifts, pricing moves, and supply chain changes before the rest of the market catches on.

Here’s why it matters:
✅ 200K+ investor-led expert interviews in the world’s largest transcript library
✅ AI-led expert calls that uncover fresh insights in hours
✅ Research synthesized in minutes so you can act faster than the competition

It's basically taking what was already the smartest, most comprehensive expert research platform out there and giving it superpowers. Instead of choosing between quality and speed, you finally get both.

Capabilities augment the world’s largest investor-led Tegus Expert Transcript Library with autonomous AI-led interviews, driving exponential scaling of insights in the AlphaSense platform

AlphaSense, the AI platform redefining market intelligence for the business and financial world, today launched its AI agent interviewer and debuted Channel Checks, game-changing capabilities that expand Tegus Expert Insights, the company’s comprehensive expert research offering. For the first time, users can now get a real-time pulse on the economy through Channel Check interviews, surfacing market-moving insights such as demand shifts, pricing changes, and supply chain disruptions.

This expansion unites the unrivaled Tegus Expert Transcript Library – the world’s largest and fastest-growing collection of investor-led expert calls, now surpassing 200,000 transcripts across 25,000 public and private companies – with the AI Interviewer’s ability to independently conduct and transcribe conversations with top-tier human experts. The result is a powerful blend of trusted, investor-generated research augmented with AI-driven interviews. By leveraging AlphaSense’s Deep Research AI agent, users can easily draw synthesized insights across the full content library, compressing days of research work into minutes.

Built on AlphaSense’s proprietary generative AI and fueled by the market’s deepest intelligence library – spanning 500 million premium business documents and the Tegus Expert Transcript Library – the AI Interviewer leverages the full breadth of prior relevant knowledge to engage senior industry experts in probing conversations, matching the precision of an experienced human interviewer while operating at the speed and scale only AI can deliver.

“With AI now running point on expert calls, AlphaSense has turned primary research into a learning engine. The more the system hears, the better it asks – and the more valuable the answers become. That virtuous cycle should raise the bar for market intelligence platforms everywhere,” said Jay Simons, General Partner at BOND.

Channel Checks are already live across dozens of high-value sub-industries within Technology, Media & Telecom, Energy & Industrials, and Healthcare – with hundreds of new interviews with human experts added every week and coverage continuing to expand.

Customers will soon be able to initiate AI-led expert calls for their own research targets by selecting a topic, approving an AI-generated question guide, and receiving a fully transcribed interview with key takeaways in hours – delivering broader coverage, faster insights, and expert research at unprecedented scale.

The Most Trusted Expert Insights, at Scale

At the core of Tegus Expert Insights is the Tegus Expert Transcript Library – the world’s largest and fastest-growing collection of investor-led interviews, more than double the combined size of all competing libraries, and growing by 7,000 transcripts each month. Trusted by thousands of firms, including more than 50% of Midas List VCs – the world’s top-ranked venture capital investors – it delivers insider perspectives from former executives, customers, competitors, and other key voices – making opaque markets transparent.

“It's my job to get up to speed on companies and markets faster and better than everyone else. Tegus makes me the expert in the room,” said Sai Senthilkumar, Partner, Growth at Redpoint Ventures.

“The investment community has built their trust in investor-led Tegus content over years, and that remains the cornerstone of our expert insights offering,” said Jack Kokko, CEO and Founder of AlphaSense. “Now by combining our unparalleled depth of Tegus content with AI agent-led interviews, we’re delivering the earliest and most accurate view of what’s actually unfolding in the market and helping decision-makers spot market shifts well before they are reflected in earnings reports, filings, or forecasts.”

Relentless Commitment to Quality, Scale, and Compliance

AlphaSense is investing aggressively in scaling proprietary expert content, with over $100 million in annual investment in the rapidly growing Tegus Expert Transcript Library, a global team of hundreds of expert recruiters across all major industries, and a market-leading compliance program led by a former Goldman Sachs executive.

Every expert call is rigorously pre-screened, monitored, and reviewed to protect against material non-public information and ensure strict adherence to global regulatory standards. With a dedicated compliance team of over 75 professionals, AlphaSense ensures clients get the insights they need – in a secure and compliant manner that has earned the market’s trust.

A Unified, End-to-End Agentic Workflow

AlphaSense is the only platform offering a fully autonomous, end-to-end agentic workflow, where AI agents:

  • Autonomously source expert insights through live interviews
  • Generate decision-ready research outputs and work product
  • Continuously learn from each new interview to make the next one even smarter (a minimal sketch follows this list)
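
To make that loop concrete, here is a short, hedged Python sketch of the described cycle: interview, synthesize, and fold takeaways back into the next question guide. Every function here is a hypothetical stand-in for illustration, not AlphaSense's actual API.

```python
# Minimal sketch of the agentic research loop described above.
# All functions are hypothetical placeholders, not AlphaSense's API.
def conduct_ai_interview(topic, guide):
    return f"transcript on {topic} covering {len(guide)} questions"

def synthesize(transcript):
    return f"takeaways: {transcript[:40]}..."

def refine_guide(guide, takeaways):
    # Each interview's takeaways seed follow-up questions for the next one.
    return guide + [f"follow-up based on {takeaways[:25]}"]

guide = ["What demand shifts are you seeing this quarter?"]
for _ in range(3):
    transcript = conduct_ai_interview("semiconductor pricing", guide)
    takeaways = synthesize(transcript)
    guide = refine_guide(guide, takeaways)

print(len(guide))  # 4 questions after three learning rounds
```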

By pairing trusted investor-led calls with high-velocity, AI-led interviews, AlphaSense’s Tegus Expert Insights amplifies the power of the market’s most comprehensive and intelligent expert research solution.

About AlphaSense

AlphaSense is the AI platform redefining market intelligence and agentic workflow orchestration, trusted by thousands of leading organizations to drive faster, more confident decisions in business and finance. The platform combines domain-specific AI with a vast content universe of over 500 million premium business documents – including equity research, earnings calls, expert interviews, filings, news, and clients’ internal proprietary content. Purpose-built for speed, accuracy, and enterprise-grade security, AlphaSense helps clients extract critical insights, uncover market-moving trends, and automate complex workflows with high-quality outputs. With AI solutions like Generative Search, Generative Grid, and Deep Research, AlphaSense delivers the clarity and depth professionals need to navigate complexity and obtain accurate, real-time information quickly. For more information, visit www.alpha-sense.com.

Media Contact

Pete Daly for AlphaSense

Email: media@alpha-sense.com

 



Fast AI inference infrastructure leader Groq and HUMAIN launch OpenAI's new open models on day zero

Available worldwide with real-time performance, low cost, and local support in Saudi Arabia

Groq, the pioneer in fast inference, and HUMAIN, a PIF company and Saudi Arabia's leading AI services provider, today announced the immediate availability of OpenAI's two open models on GroqCloud. The launch delivers gpt-oss-120B and gpt-oss-20B with full 128K context, real-time responses, and integrated server-side tools live on Groq's optimized inference platform from day zero.

Groq has long supported OpenAI's open-source efforts, including large-scale deployment of Whisper. This launch builds on that foundation, bringing OpenAI's newest models to production with global access and local support through HUMAIN.

"OpenAI is setting a new high performance standard in open source models," said Jonathan Ross, CEO of Groq. "Groq was built to run models like this, fast and affordably, so developers everywhere can use them from day zero. Working with HUMAIN strengthens local access and support in the Kingdom of Saudi Arabia, empowering developers in the region to build smarter and faster."

"Groq delivers the unmatched inference speed, scalability, and cost-efficiency we need to bring cutting-edge AI to the Kingdom," said Tareq Amin, CEO at HUMAIN. "Together, we're enabling a new wave of Saudi innovation—powered by the best open-source models and the infrastructure to scale them globally. We're proud to support OpenAI's leadership in open-source AI."

Built for full model capabilities

To make the most of OpenAI's new models, Groq delivers extended context and built-in tools like code execution and web search. Web search provides real-time, relevant information, while code execution enables reasoning and complex workflows. Groq's platform delivers these capabilities from day zero with a full 128K-token context length.
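
As a minimal sketch of what day-zero access looks like in practice, the call below uses GroqCloud's OpenAI-compatible endpoint via the openai Python package. The model ID string is an assumption, so confirm the exact identifier in the GroqCloud console.

```python
# Hedged sketch: calling gpt-oss-120B on GroqCloud through its
# OpenAI-compatible endpoint. The model ID is assumed, not confirmed.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible API
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed identifier; check the Groq console
    messages=[{"role": "user", "content": "Give me a one-paragraph primer on LPUs."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```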

Unmatched price-performance

Groq's purpose-built stack delivers the lowest cost per token for OpenAI's new models while maintaining speed and accuracy.

gpt-oss-120B is currently running at 500+ t/s and gpt-oss-20B is currently running at 1000+ t/s on GroqCloud.

Groq is offering OpenAI's latest open models at the following pricing (a brief cost sketch follows the note below):

  • gpt-oss-120B: $0.15 / M input tokens and $0.75 / M output tokens
  • gpt-oss-20B: $0.10 / M input tokens and $0.50 / M output tokens

Note: For a limited time, tool calls used with OpenAI's open models will not be charged. Learn more at groq.com/pricing.
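
At those per-million-token rates, per-request costs are straightforward to estimate. The sketch below uses the published prices with hypothetical token counts.

```python
# Cost estimate at the published GroqCloud prices (USD per million tokens).
# The example token counts are hypothetical.
PRICES = {
    "gpt-oss-120B": {"input": 0.15, "output": 0.75},
    "gpt-oss-20B": {"input": 0.10, "output": 0.50},
}

def request_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request at per-million-token pricing."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A 4,000-token prompt with a 1,000-token completion on gpt-oss-120B:
print(f"${request_cost('gpt-oss-120B', 4_000, 1_000):.6f}")  # $0.001350
```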

Global from day zero

Groq's global data center footprint across North America, Europe, and the Middle East ensures reliable, high-performance AI inference wherever developers operate. Through GroqCloud, OpenAI's open models are now available worldwide with minimal latency.

About Groq

Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.9 million developers trust Groq to build fast and scale smarter.

Contact: pr-media@groq.com

About HUMAIN

HUMAIN, a PIF company, is a global artificial intelligence company delivering full-stack AI capabilities across four core areas - next-generation data centers, hyper-performance infrastructure & cloud platforms, advanced AI Models, including the world's most advanced Arabic multimodal LLMs, and transformative AI Solutions that combine deep sector insight with real-world execution.

HUMAIN's end-to-end model serves both public and private sector organisations, unlocking exponential value across all industries, driving transformation and strengthening capabilities through human-AI synergies. With a growing portfolio of sector-specific AI products and a core mission to drive IP leadership and talent supremacy world-wide, HUMAIN is engineered for global competitiveness and national distinction.

www.humain.ai


WATCH: Lightmatter's record-breaking Passage M1000 Powering the Next 1000x in AI Performance

ALSO READ:
Doubling Down: World’s First 16-λ Single Fiber Bidirectional Link for AI
The computational needs of Artificial Intelligence (AI) have pushed hardware to its absolute limits. One of the most significant bottlenecks in building a massive, scale-up AI system is interconnect I/O—specifically, the number of optical fibers you can connect to a GPU or a switch. Co-Packaged Optics (CPO) has emerged as a leading solution, but it still faces the physical constraint of fiber real estate.

The Passage 3D Silicon Photonics Engine scales from single-chip packages to multi-die complexes and all the way to wafer-scale—the world’s fastest interconnect.

Explore the Passage M1000 Reference System, a groundbreaking platform built around the Passage M1000 3D Photonic Superchip. Discover how this revolutionary photonic interconnect technology is changing the game for Artificial Intelligence (AI) infrastructure by enabling massive scale-up bandwidth and radix, connecting GPUs, TPUs and data center switches in the largest AI model training clusters.

3D Photonic Interconnect for AI

The Passage™ M1000 3D Photonic Superchip reference platform demonstrates a record-breaking 114 Tbps of total bandwidth on a 4,000 mm² photonic interposer built for massive die complexes. The M1000 carries 256 optical fibers, supports 1.5 kW+ power delivery, and includes the world’s first built-in solid-state optical circuit switching.
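
For a rough sense of scale, dividing the aggregate bandwidth across the fiber count gives the implied per-fiber rate; the even-split assumption below is ours, not a published spec.

```python
# Implied per-fiber bandwidth from the published M1000 figures,
# assuming (our assumption) the 114 Tbps aggregate is spread evenly.
total_tbps = 114
fibers = 256
print(f"{total_tbps * 1000 / fibers:.0f} Gbps per fiber")  # ~445 Gbps
```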

Unleash I/O anywhere within your chip complex. Eliminate energy-draining, latency-heavy network-on-chip journeys. Outperform shoreline-limited technologies like conventional CPO, NPO, and pluggables.


Lightmatter's groundbreaking 8X leap in bidirectional wavelengths per fiber paves the way for next-generation AI data centers

Lightmatter Achieves World-First 16-Wavelength Bidirectional Link on Single-Mode Optical Fiber

A massive historic leap in AI infrastructure. This is what scaling AI should look like.

Lightmatter has achieved a world-first in optical communications by demonstrating a 16-wavelength bidirectional link on a single strand of standard optical fiber, along with unprecedented thermal performance and polarization insensitivity.


This landmark achievement is an 8X leap in bidirectional fiber bandwidth density, directly solving the bottlenecks that are limiting the scale and performance of today’s AI data centers. Powered by our Passage™ interconnect and Guide™ laser technologies, this breakthrough delivers an unprecedented 800 Gbps per fiber. It’s an architectural leap forward that enables hyperscalers to dramatically increase efficiency and scalability, paving the way for the next generation of AI models.

Lightmatter, the leader in photonic (super)computing, today announced a groundbreaking achievement in optical communications: a 16-wavelength bidirectional Dense Wavelength Division Multiplexing (DWDM) optical link operating on one strand of standard single-mode (SM) fiber. Powered by Lightmatter’s industry-leading Passage™ interconnect and Guide™ laser technologies, this breakthrough shatters previous limitations in fiber bandwidth density and spectral utilization, setting a new benchmark for high-performance, resilient data center interconnects.

With the rise of complex trillion-parameter Mixture of Experts (MoE) models, scaling AI workloads is increasingly bottlenecked by bandwidth and radix (I/O port count) limitations in data center infrastructure. Lightmatter’s Passage technology delivers an unprecedented 800 Gbps bidirectional bandwidth (400 Gbps transmit and 400 Gbps receive) per single-mode fiber for distances of several hundred meters or more. This achievement advances chip I/O design by simultaneously increasing both radix and bandwidth per fiber compared to existing co-packaged optics (CPO) solutions.
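
A hedged back-of-the-envelope split of those figures: how the 16 wavelengths divide between the two directions is not stated, so the sketch below works through two plausible mappings.

```python
# 800 Gbps bidirectional = 400 Gbps transmit + 400 Gbps receive per fiber.
per_direction_gbps = 400
lambdas = 16

# (a) Wavelengths split by direction: 8 TX + 8 RX.
print(per_direction_gbps / 8)   # 50.0 Gbps per lambda

# (b) Every wavelength carries both directions at once.
print(per_direction_gbps / 16)  # 25.0 Gbps per lambda per direction
```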

While commercial bidirectional (BiDi) transmission on a single fiber has been limited mainly to two wavelengths, achieving 16 wavelengths (also referred to as “lambdas”) has historically required multiple or specialized fibers. This Lightmatter milestone addresses significant technical challenges related to managing complex wavelength-dependent propagation characteristics, power budget constraints, optical nonlinearity, and mitigating crosstalk and backscattering in a single fiber. Such innovations pave the way for the next major advances in AI model development, which demand more extensive and efficient high-bandwidth networking than exists today.

“Data centers are the new unit of compute in the AI era, with the next 1000X performance gain coming largely from ultra-fast photonic interconnects,” said Nicholas Harris, founder and CEO of Lightmatter. “Our 16-lambda bidirectional link is an architectural leap forward. Hyperscalers can achieve significantly higher bandwidth density with standard single-mode fiber, reducing both capital expenditure and operational complexity, while enabling higher ‘radix’—more connections per XPU or switch.”

“Lightmatter’s innovation arrives at a pivotal moment for hyperscale AI infrastructure. The ability to dramatically increase bandwidth density on existing single-mode fiber, coupled with the technology’s robust thermal performance, is a game-changer for data center scalability and efficiency. This solves one of the most pressing challenges in AI development and brings advanced Co-Packaged Optics a giant step closer to market,” said Alan Weckel, co-founder and analyst, 650 Group.

Lightmatter’s breakthrough incorporates a proprietary closed-loop digital stabilization system that actively compensates for thermal drift, ensuring continuous, low-error transmission over wide temperature fluctuations. In addition, architectural innovations make the Passage 3D CPO platform inherently polarization-insensitive, maintaining robust performance even when the fibers are being handled or subject to mechanical stress. Standard SM fiber, while offering immense bandwidth potential, does not inherently maintain light’s polarization state, unlike specialized and more costly polarization-maintaining (PM) fiber. By achieving polarization insensitivity, Lightmatter enables the use of cost-effective SM fiber for its industry-leading bidirectional DWDM technology.
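
Conceptually, that kind of closed-loop stabilization is a feedback controller: measure the drift, then steer a heater or tuner back toward the target. The sketch below is a generic proportional-integral loop for illustration only, not Lightmatter's actual design.

```python
# Generic closed-loop wavelength stabilization sketch (illustrative only).
def stabilization_step(target_nm, measured_nm, state, kp=0.8, ki=0.1):
    """One proportional-integral update of a heater drive signal."""
    error = target_nm - measured_nm
    state["integral"] += error
    return kp * error + ki * state["integral"]  # new heater setting

state = {"integral": 0.0}
drive = stabilization_step(1310.00, 1310.05, state)  # hypothetical readings
print(f"heater correction: {drive:+.3f}")
```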

This combination of unparalleled fiber bandwidth density, efficient spectral utilization, and robust performance makes Lightmatter’s Passage technology foundational for the industry’s transition from electrical to optical interconnects in AI data centers. It empowers customers to accelerate development of larger and more capable AI models with more powerful, efficient, and scalable data centers.

For more information about Passage technology, please visit https://lightmatter.co/

About Lightmatter

Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.

Media Contact:

Lightmatter
John O’Brien
press@lightmatter.co

Experience our breakthroughs.
• Watch the demo (video): https://bit.ly/3Hkt0kW
• Passage M1000 End to End System (video): https://bit.ly/46XaR72

Read our Chief Scientist Darius Bunandar’s “Doubling Down: World’s First 16-λ Single Fiber Bidirectional Link for AI”: https://bit.ly/4fPUZWs


Photonics: The Next Leap in AI Infrastructure

Light-speed computing for the AI revolution

Just as fiber optics revolutionized global communications by replacing electrical signals with light, photonics is poised to transform AI infrastructure by fundamentally reimagining how we process and move data.

The Energy Crisis in AI

Today's AI systems consume staggering amounts of energy. Training large language models requires megawatt-hours of electricity, while inference at scale demands massive data centers that consume as much power as small cities. As AI capabilities expand exponentially, this energy consumption threatens to become unsustainable.

The bottleneck isn't just in computation—it's in the fundamental physics of moving electrons through silicon. Every bit transfer generates heat, every calculation requires power, and every nanosecond of latency compounds into massive inefficiencies at scale.

Enter Photonic Computing

Photonic computing replaces electrons with photons—particles of light that move at the ultimate speed limit of the universe. Unlike electrons, photons don't interact with each other, eliminating interference and reducing energy loss. They can carry multiple wavelengths simultaneously, enabling massive parallel processing that would be impossible with traditional electronics.

Key Advantages of Photonic AI

  • Ultra-low latency: Light-speed processing reduces computation time by orders of magnitude
  • Massive bandwidth: Wavelength division multiplexing enables parallel data streams (see the arithmetic sketch after this list)
  • Energy efficiency: Photons generate virtually no heat, dramatically reducing power consumption
  • Scalability: Linear scaling without the quadratic growth in complexity of electronic systems
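
To make the bandwidth bullet concrete: aggregate WDM capacity is simply channels times per-channel rate. The numbers below are illustrative, not a product spec.

```python
# Illustrative WDM aggregate-bandwidth arithmetic (hypothetical numbers).
channels = 16          # wavelengths multiplexed on one fiber
gbps_per_channel = 50  # assumed per-wavelength data rate
print(f"{channels * gbps_per_channel} Gbps aggregate on a single fiber")  # 800
```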

The Fiber Optics Parallel

The transformation mirrors the fiber optics revolution of the 1980s and 1990s. Before fiber optics, long-distance communication relied on electrical signals that degraded over distance and required frequent amplification. The switch to light-based transmission enabled the global internet, high-definition video streaming, and instantaneous worldwide communication.

Similarly, photonic computing promises to break through the current limitations of electronic processing. Where traditional processors hit physical limits of heat dissipation and signal interference, photonic processors operate in an entirely different domain—one where the speed of light is the only constraint.

Real-World Applications

Photonic AI infrastructure would enable breakthrough applications across industries:

  • Real-time language translation with zero perceptible latency
  • Autonomous vehicles with instantaneous decision-making capabilities
  • Climate modeling with unprecedented resolution and speed
  • Medical diagnostics processing complex imaging in real-time
  • Financial modeling with microsecond decision capabilities

Lightmatter: Pioneering the Transition

Lightmatter is leading the photonic AI revolution with a pragmatic, phased approach that bridges today's silicon infrastructure with tomorrow's all-optical systems. Rather than attempting to replace entire computing architectures overnight, the company is strategically transforming AI infrastructure piece by piece, starting where it matters most.

Phase 1: Light-Enabled Silicon

Lightmatter's breakthrough began with its photonic interconnect technology—the critical bottleneck in modern AI systems. Its Passage™ interconnect replaces electrical wires between chips with light-based connections, instantly eliminating the bandwidth limitations and energy waste of traditional copper interconnects.

This approach is genius in its simplicity: existing silicon chips continue to handle computation, but data moves between them at light speed with minimal energy loss. GPU clusters that once consumed massive amounts of power just moving data between processors now operate with dramatically improved efficiency and throughput.

Lightmatter's Interconnect Advantages

  • 25x bandwidth increase over electrical interconnects (see the toy numbers after this list)
  • 90% reduction in interconnect power consumption
  • Drop-in compatibility with existing silicon processors
  • Scalable architecture supporting thousands of connected devices
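
Taken at face value, the first two claims imply simple scaling factors. The baseline figures in this sketch are hypothetical, chosen only to show the arithmetic.

```python
# Toy illustration of the claimed gains; the baselines are hypothetical.
baseline_interconnect_kw = 10.0
photonic_kw = baseline_interconnect_kw * (1 - 0.90)  # 90% power reduction claim
print(f"{photonic_kw:.1f} kW")  # 1.0 kW

baseline_tbps = 1.0
print(f"{baseline_tbps * 25:.0f} Tbps")  # 25x bandwidth claim
```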

Phase 2: Full Photonic Infrastructure

Lightmatter's long-term vision extends beyond interconnects to complete photonic computing systems. Its roadmap includes photonic neural network accelerators that perform matrix operations—the fundamental building blocks of AI—entirely in the optical domain. This represents the ultimate evolution: computation itself happening at light speed.

These full photonic systems promise to deliver the theoretical maximums of optical computing: processing speeds limited only by the speed of light, energy consumption approaching the fundamental physical limits, and massive parallel processing capabilities that dwarf today's electronic systems.

The Path Forward

Lightmatter's phased approach demonstrates the practical path to photonic AI dominance. By first solving the interconnect bottleneck with immediately deployable technology, the company is building the foundation for more revolutionary advances while delivering real value today.

The transition won't happen overnight, but the physics is compelling. As AI models continue to grow in complexity and energy demands become unsustainable, photonic infrastructure offers a path to exponentially more capable and efficient AI systems.

The Light-Speed Future

Just as fiber optics made the modern internet possible, photonic computing will unlock the next generation of AI—systems that think at the speed of light while consuming a fraction of today's energy.

The question isn't whether photonics will transform AI infrastructure—it's how quickly we can make the transition. In the race for artificial general intelligence, light itself may be our greatest ally.


Iambic Therapeutics taps Lambda for AI compute to develop the next generation of AI molecular property prediction for drug discovery

Iambic Therapeutics, a CNBC 2025 Disruptor 50 Company developing novel medicines using its AI-driven discovery and development platform, has partnered with Lambda to power its platform’s AI infrastructure for AI-driven drug discovery.

Lambda, the GPU cloud company founded by AI engineers, today announced that Iambic Therapeutics, a clinical-stage life science and technology company developing novel medicines using its AI-driven discovery and development platform, has selected Lambda to provide an NVIDIA HGX B200 cluster to support the training of Enchant, its industry-leading model for molecular property prediction.

Iambic’s Enchant is a breakthrough multi-modal transformer model for predicting clinical and preclinical endpoints related to the drug discovery and development process. Enchant enables researchers to determine the viability of new drug molecules and make high-confidence predictions where data is most sparse, helping address the critical real-world challenge of understanding how novel drug candidates may affect a patient while still in the earliest stages of discovery. Iambic’s recently announced Enchant v2 provides accurate predictions for dozens of biological, physicochemical, pharmacokinetic, metabolic, safety, and other properties essential for clinical success.

“With the release of Enchant v2, we demonstrated both the model’s accuracy and scalability, and we believe we can rapidly build on these gains through model scale alone,” said Matt Welborn, PhD, Iambic’s VP of Machine Learning. “We are expanding our successful relationship with Lambda to an NVIDIA HGX B200 cluster, which will accelerate this opportunity and the breadth of pre-clinical and clinical endpoints Enchant can predict, increasing the likelihood of a molecule’s success in human studies and the efficiency of drug development.”

Enchant’s high-confidence predictions enable multi-parameter optimization for the design of potentially more effective medicines, program prioritization, and the design of clinical trials for the potentially rapid translation of novel medicines. Iambic researchers also demonstrated that in some cases Enchant can be a better predictor of in vivo drug clearance than in vitro experiments – a key advancement as regulators look for drug developers to broaden their use of in silico testing. Today, Enchant is the leading model in the field based on performance benchmarks across diverse molecular property prediction tasks.

“We're thrilled to deepen our partnership with Iambic, a leader in AI-driven drug discovery,” said Robert Brooks IV, Founding Team and VP, Revenue at Lambda. “By leveraging Lambda's 1-Click Clusters for rapid testing and validation, Iambic was able to seamlessly scale to an NVIDIA HGX B200 cluster to accelerate breakthroughs in life sciences.”

To learn more about Lambda’s cloud offerings for AI training and inference, visit Lambda’s website.

About Iambic’s AI-Driven Discovery Platform

The Iambic AI-driven platform was created to address the most challenging design problems in drug discovery, leveraging technology innovations such as Enchant (multimodal transformer model that predicts clinical and preclinical endpoints) and NeuralPLexer (best-in-class predictor of protein and protein-ligand structures). The integration of physics principles into the platform’s AI architectures improves data efficiency and allows molecular models to venture widely across the space of possible chemical structures. The platform enables identification of novel chemical modalities for engaging difficult-to-address biological targets, discovery of defined product profiles that optimize therapeutic window, and multiparameter optimization for highly differentiated development candidates. Through close integration of AI-generated molecular designs with automated chemical synthesis and experimental execution, Iambic completes design-make-test cycles on a weekly cadence.

About Iambic Therapeutics

Iambic is a clinical-stage life-science and technology company developing novel medicines using its AI-driven discovery and development platform. Based in San Diego and founded in 2020, Iambic has assembled a world-class team that unites pioneering AI experts and experienced drug hunters. The Iambic platform has demonstrated delivery of new drug candidates to human clinical trials with unprecedented speed and across multiple target classes and mechanisms of action. Iambic is advancing a pipeline of potential best-in-class and first-in-class clinical assets, both internally and in partnership, to address urgent unmet patient need. Learn more about the Iambic team, platform, pipeline, and partnerships at Iambic.ai

About Lambda

Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our goal is to become the #1 AI compute platform serving developers across the entire AI development lifecycle. We enable AI engineers to easily, securely and affordably build, test and deploy AI products at scale. Our product portfolio spans from on-prem GPU hardware to hosted GPUs in the cloud. Lambda’s mission is to create a world where access to computation is as effortless and ubiquitous as electricity.

Contacts

Media Contact
pr@lambdal.com


Lambda and Cologix launch the first NVIDIA HGX B200-accelerated AI clusters in a Columbus data center

Supermicro’s AI-optimized hardware powers turnkey AI infrastructure for enterprise innovation in the Midwest

Cologix, a leading network-neutral interconnection and hyperscale edge data center company in North America, today announced its collaboration with Lambda, the AI Developer Cloud, to deploy NVIDIA HGX B200-accelerated 1-Click Clusters at Cologix’s COL4 Scalelogix℠ data center in Columbus, Ohio. Built on Supermicro’s high-performance, energy-efficient AI solutions, this is the first deployment of its kind in the region, delivering enterprise-grade AI compute with simplified access for businesses across the Midwest.

As AI adoption accelerates, organizations are looking for fast, cost-effective ways to support large model training, fine-tuning and inference workloads. After standing up AI infrastructure in Chicago in 2024, this strategic collaboration brings Lambda’s NVIDIA GPU-accelerated 1-Click Clusters™ to Columbus, enabling regional enterprises to spin up high-performance compute infrastructure in seconds—no infrastructure management required.

“Columbus is a thriving hub for AI innovation, from manufacturing to healthcare,” said Robert Brooks IV, Founding Team & VP, Revenue at Lambda. “With Supermicro’s trusted systems and Cologix’s reliable infrastructure, we’re giving Lambda’s customers in the Midwest the fastest path to production-ready AI—and the added flexibility to integrate with hyperscaler environments.”

The launch increases the availability of Lambda’s popular 1-Click Clusters™, which are purpose-built for model training and inference at scale. Combined with Supermicro’s AI-optimized hardware and Cologix’s carrier-dense environment, this deployment brings frictionless access to next-gen AI performance all within a highly secure, compliant data center environment.

Built on NVIDIA Blackwell GPUs, 1-Click Clusters™ can be self-served or provisioned through Lambda’s flagship GPU Flexible Commitment (GFC), an innovative compute consumption model that grants organizations access to Lambda’s entire Cloud portfolio and provides seamless transitions to the next generations of NVIDIA accelerated computing.

“Columbus is one of the fastest-growing digital corridors in the country and this launch brings coastal-level AI infrastructure into the region,” said Chris Heinrich, Chief Revenue Officer of Cologix. “Our collaboration with Lambda and Supermicro gives regional enterprises a powerful edge, combining low-latency access, dense interconnection and ready-to-deploy clusters, giving teams the ability to move faster and scale smarter.”

Cologix is currently the largest colocation and interconnection provider in the area with a portfolio of four data centers spanning a total of 500,000 square feet and 80 MW of power. All four of Cologix’s data centers in Columbus are interconnected with a diverse fiber ring. Additionally, Cologix has Ohio’s most comprehensive carrier hotel in its Columbus data centers as well as an interconnection ecosystem of 50+ unique network and cloud service providers, two public cloud onramps with access to Amazon Web Services® Direct Connect and Google Cloud Interconnect and the Ohio IX internet exchange.

“Supermicro’s collaboration with Lambda and Cologix delivers real-world impact,” said Charles Liang, president and CEO of Supermicro. “Our NVIDIA HGX B200-based hardware enables the highest performance AI workloads in a space- and energy-efficient footprint. Together, we’re bringing those benefits to businesses in the Midwest and beyond.”

Together, Lambda, Supermicro and Cologix are enabling enterprises in healthcare, finance, logistics, retail and manufacturing to accelerate their AI roadmaps without the heavy lift of managing infrastructure. This deployment is part of a broader trend of moving high-performance compute closer to where data is generated and used and reflects Cologix’s continued investment in digital infrastructure across North America.

Learn how to supercharge your AI stack with Lambda’s 1-Click Cluster solutions and tap into Cologix’s latest AI-ready data centers to accelerate your business’ growth at the digital edge.

About Cologix

Cologix powers digital infrastructure with 45+ hyperscale edge data centers and interconnection hubs across 12 North American markets, providing high-density, ultra-low latency solutions for cloud providers, carriers and enterprises. With AI-ready, industry-leading facilities, Cologix offers scalable, flexible and sustainable data center options to help its customers accelerate their business at the digital edge. Cologix provides extensive physical and virtual connections, including Access Marketplace, where customers gain fast, reliable and self-service provisioning for on-demand connectivity. For more information, visit Cologix or follow us on LinkedIn and X.

About Lambda

Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our GPU cloud and on-prem hardware enables AI developers to easily, securely and affordably build, test and deploy AI products at scale. Lambda’s mission is to accelerate human progress with ubiquitous and affordable access to computation. One person, one GPU.

About Supermicro

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names, and trademarks are the property of their respective owners.




Nixxy (NASDAQ: NIXX) acquires Leadnova.ai platform from NexGenAI and reports $5.2 million in May revenue

Nixxy (NASDAQ: NIXX), an AI-powered data and communications infrastructure company, today announced the successful acquisition of the Leadnova.ai platform assets from the NexGenAI suite of companies. Nixxy also reported that it generated approximately $5.2 million in unaudited gross revenue for the month of May 2025, up from approximately $1.4 million in April 2025.

Leadnova.ai Platform Asset Acquisition
Leadnova.ai is a SaaS-based solution designed to support business development through data delivery, sales enablement, outreach automation, and engagement analytics. As part of the asset acquisition, Nixxy will take ownership of the Leadnova.ai domain, front-end and back-end software, automation engines, structured data repositories, and API infrastructure.

The integration of Leadnova.ai into Nixxy's AuralinkAI platform is expected to accelerate Nixxy's AI-enhanced telecom strategy and enable more intelligent, performance-driven enterprise communication services.

"Our strategy is focused on combining infrastructure with automation and AI," said Mike Schmidt, CEO of Nixxy. "Leadnova.ai adds powerful capabilities that align perfectly with our vision to deliver smarter, higher-margin communications services at scale."

May Revenue Performance and Telecom Growth Trajectory
Nixxy generated gross revenues (unaudited) of approximately $5.2 million in May 2025, representing continued traction in its growth and expansion strategy. With additional customer agreements and infrastructure now coming online, Nixxy continues to target a $10 million monthly revenue run rate by August 2025, contingent on continued execution and successful onboarding of new partners.

"We're pleased with the early momentum and the diversity of revenue sources starting to emerge," added Schmidt. "We expect ongoing growth across voice, messaging, and enterprise automation as we scale our network and integrate our AI capabilities."

Further updates will be provided as Nixxy progresses through key execution milestones.

Nixxy Announces Registered Direct Offering
Nixxy today announced the pricing of a registered direct offering for the sale and issuance of up to 846,667 shares of the Company's common stock to a small group of accredited investors at a price per share of $1.50.

The shares of common stock in the registered direct offering (but excluding the securities issued in the planned private placement) were offered pursuant to a "shelf" registration statement on Form S-3 (File No. 333-267470) initially filed with the Securities and Exchange Commission (the "SEC") on September 16, 2022, and declared effective by the SEC on September 30, 2022. The offering of the common stock in the registered direct offering was made only by means of a prospectus, including a prospectus supplement, forming a part of the effective registration statement. A final prospectus supplement and accompanying prospectus describing the terms of the proposed offering was filed with the SEC on May 29, 2025 and is available on the SEC's website located at http://www.sec.gov. Electronic copies of the final prospectus supplement and the accompanying prospectus may be obtained, when available, by contacting the Company at 1178 Broadway, 3rd Floor, New York, NY 10001, or by telephone at (877) 708-8878.

About Nixxy, Inc.
Nixxy, Inc. (NASDAQ:NIXX) is a next-generation communications and infrastructure company transforming the telecom landscape through AI-powered platforms, intelligent routing, and enterprise-grade messaging solutions. Anchored by its AuralinkAI platform, Nixxy integrates advanced automation, data analytics, and scalable infrastructure to deliver high-performance voice and messaging services globally. With a focus on operational efficiency, strategic acquisitions, and platform scalability, Nixxy is building a robust telecom network optimized for volume, intelligence, and margin. Nixxy's hybrid approach, combining infrastructure ownership with AI-enhanced service delivery, differentiates it in modern telecom innovation, serving both wholesale and enterprise clients.

Filings and press releases can be found at http://www.nixxy.com/investor-relations.

Contact Information
Investor Contact: Nixxy, Inc.
Investor Relations Email: IR@nixxy.com
Phone: (877) 708-8868


Groq’s CEO Jonathan Ross on why AI’s next big shift isn’t about Nvidia

"Right now, we're in the printing press era of AI, the very beginning," says Groq Founder & CEO Jonathan Ross

ALSO WATCH & READ
Groq Founder & CEO Jonathan Ross Says Speed to Deployment Sets Its AI Inference Infrastructure Apart
Groq and Bell Canada partner to build 6 AI inference infrastructure data centers
Groq is the official hyperscaler inference provider for Saudi AI data center company HUMAIN

The race to better compete with AI chip darling Nvidia (NVDA) is well underway. Enter Groq. The company makes what it calls language processing units (LPUs). These LPUs are designed to run large language models faster and more efficiently at inference time, unlike Nvidia GPUs, which are primarily optimized for training models. Groq’s last capital raise was in August 2024, when it raised $640 million from the likes of BlackRock and Cisco. The company’s valuation at the time stood at $2.8 billion, a fraction of Nvidia’s more than $3 trillion market cap. Groq’s valuation currently clocks in at $3.5 billion, according to Yahoo Finance private markets data.

At the helm of Groq is founder and CEO Jonathan Ross. While at Google, Ross designed the custom chips that the tech giant would go on to train its AI models on. Yahoo Finance Executive Editor Brian Sozzi sits down on the Opening Bid podcast with Ross to discuss his future plans for Groq. Ross is fresh off a trip with other key tech executives to Saudi Arabia, joining President Donald Trump in forging a deeper tech relationship with the country. Ross begins by sharing with Sozzi the conversations he had on the ground in Saudi Arabia.

