Airspace Link Launches the Operations Center in AirHub® Portal for Drone Operations

Airspace Link, the leading FAA-approved UAS (Drone) Service Supplier of B4UFLY and LAANC, today announced the official launch of the Operations Center in its flagship product, AirHub® Portal. Designed to give organizations real-time operational oversight, the Operations Center enables strategic insight, situational awareness, and live tracking of drone and crewed aircraft activity, all in one powerful system.

"The Operations Center transforms AirHub® Portal into a true command center for organizations managing drone operations at scale," said Tyler Dicks, Head of Product at Airspace Link. "From public safety teams and federal agencies to state and local governments and commercial enterprises, we're helping a wide range of users gain the operational clarity they need to deliver safer, smarter, and more coordinated drone operations."

With the new Operate tab activated in AirHub® Portal, users can visualize:

  • Active and planned drone operations for the day
  • Live crewed aircraft traffic and uncrewed drone activity, with supporting sensor partner integration
  • Current weather conditions across mission areas
  • B4UFLY airspace briefings
  • [Coming Soon] Real-time telemetry from user organizations' own connected drone flights


Airspace Link's new Operations Center in AirHub® Portal delivers a single-pane-of-glass solution for organizations that require coordinated, real-time airspace oversight. Built for flight operations managers, public safety agencies, airfield managers, and state and local authorities, the Operations Center offers a unified, interactive map display that provides a complete common operational picture of all active and planned missions, ensuring safer, more informed, and more efficient decision-making.

Also designed to meet the needs of IT leaders and security professionals, the Operations Center includes robust post-mission analytics, audit trails, and automatic reporting to support regulatory compliance, internal governance, and continuous operational improvement. Critically, Airspace Link safeguards its technology with enterprise-grade security and privacy protocols, backed by SOC 2 and ISO 27001 certifications — ensuring sensitive operational data is protected by industry-leading standards and rigorous best practices.

At the core of the Operations Center is its open platform architecture, which integrates with a range of airspace awareness data sources, including crewed aircraft ADS-B detection systems from industry leader uAvionix.

"We're proud to partner with Airspace Link in delivering high integrity live aircraft traffic data for the Operations Center," said Cyriel Kronenburg, Vice President for UAS and Aviation Networks from uAvionix, Airspace Link's trusted sensor partner. "This integration ensures organizations have access to a high quality and accurate view of their surrounding airspace on a single pane of glass — a critical component for safe and effective drone operations."

Now available for AirHub® Portal organization accounts, the Operations Center joins Airspace Link's full suite of capabilities – from preflight planning and LAANC authorization to internal operation approvals, crew and asset management, and flight logging. Together, these tools form a comprehensive Drone Operations Management System (DOMS) purpose-built for the needs of modern, connected drone programs.

"Whether you're overseeing a city-wide drone program or scaling enterprise operations, the Operations Center delivers the situational awareness and accountability today's teams demand," Dicks added. "It's about empowering diverse stakeholders with the tools to operate smarter, safer, and with total confidence."

As both an FAA-approved UAS Service Supplier of LAANC and B4UFLY, and a provider of advanced drone operations software, Airspace Link offers one of the only fully integrated Drone Operations Management Systems in the market, eliminating the need to manage multiple systems or vendors.

See It Live at XPONENTIAL 2025

Airspace Link will be exhibiting at XPONENTIAL 2025 in Houston. Visit us at Booth #4320 for a live demonstration of the Operations Center and to explore how AirHub® Portal can elevate your organization's drone operations.

Learn more and book your personalized demo here: https://airspacelink.com/xponential2025

About Airspace Link

Founded in Detroit in 2018, Airspace Link is a leading FAA-approved UAS Service Supplier of LAANC and B4UFLY, creating the digital infrastructure for the safe integration of drones into the national airspace and local communities. As SOC 2 compliant and ISO 27001-certified, Airspace Link's drone operations management system, AirHub® Portal, empowers government entities, commercial fleets, certified drone pilots, and the broader drone industry with the tools needed to enable safe, compliant, and efficient drone operations. For more information about Airspace Link and AirHub® Portal, visit www.airspacelink.com.

Media Contact
Rich Fahle
Rich.fahle@airspacelink.com

Sam Stewart Gacaferi
Sam.stewart@airspacelink.com


Groq and Bell Canada partner to build six AI inference data centers

Groq Becomes Exclusive Inference Provider for Bell Canada's Sovereign AI Network


New data centers across North America expand Groq's network, now serving over 20 million tokens per second

Groq, the pioneer in fast AI inference, today announced an exclusive partnership with Bell Canada to power Bell AI Fabric, the country's largest sovereign AI infrastructure project.

Bell AI Fabric will establish a national AI network across six sites, targeting 500MW of clean, hydro-powered compute. It begins with a 7MW Groq facility in Kamloops, British Columbia, coming online in June.

"As AI moves into production, nations are rethinking where inference runs and who controls it," said Jonathan Ross, CEO and Founder of Groq. "We're building infrastructure that's fast, affordable, and sovereign by design, already powering some of the largest inference deployments in the world."

This month, Groq also brought new data centers online in Houston (DataBank) and Dallas (Equinix), pushing total global network capacity to over 20 million tokens per second.

The momentum reflects rising demand for Groq's LPU-based systems—built for real-time inference with unmatched speed and efficiency. Groq delivers the lowest cost per token without compromise, making large-scale AI viable for governments and enterprises worldwide.

Groq builds fast. Data centers go live in weeks, bringing AI closer to users and giving partners more control over where and how inference runs. Local infrastructure means lower latency, stronger data governance, and faster response times at scale.

"Through Bell AI Fabric, we're building the backbone for Canada's AI economy," said Mirko Bibic, President & CEO, BCE and Bell Canada. "Groq's technology delivers the speed and efficiency our customers need—now, not years from now."

About Groq

Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.6 million developers and Fortune 500 companies trust Groq to build fast and scale smarter.

Groq Media Contact
pr-media@groq.com


Databricks to acquire Neon for $1 billion to deliver serverless Postgres for AI agents and developers


Databricks, the Data and AI company, announced its intent to acquire Neon, a leading serverless Postgres company. As the $100-billion-plus database market braces for unprecedented disruption driven by AI, Databricks plans to continue innovating and investing in Neon's database and developer experience for existing and new Neon customers and partners.

Neon: An Open, Serverless Foundation for Developers and AI Agents

AI agents are becoming increasingly integral components for modern developers, and Neon is purpose-built to support their agentic workflows. Recent internal telemetry showed that over 80 percent of the databases provisioned on Neon were created automatically by AI agents rather than by humans, underscoring how explosively agentic workloads are growing. These workloads differ from human-driven patterns in three important ways:

  1. Speed + flexibility: Agents operate at machine speed and traditional database provisioning often becomes a bottleneck — Neon can spin up a fully isolated Postgres instance in 500 milliseconds or less and supports instant branching and forking of not only database schema but also data, so experiments never disturb production.

  2. Cost proportionality: Agents demand a cost structure that scales precisely with usage — Neon's full separation of compute and storage keeps the total cost of ownership for thousands of ephemeral databases proportional to the queries they actually run.

  3. Open source ecosystem: Agents expect to leverage the rich Postgres community — Neon is 100 percent Postgres-compatible and works out of the box with popular extensions.
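The provisioning-and-branching pattern behind these points can be sketched against Neon's public HTTP API. Everything below — the endpoint path, payload shape, and project ID — is an assumption drawn from Neon's published API documentation, not from this announcement, and the request is built but never sent.

```python
import json
import urllib.request

# Hypothetical sketch: how an agent might fork a production database into
# an isolated branch via Neon's HTTP API. The endpoint and payload shape
# are assumptions based on Neon's public API docs, not this announcement.
NEON_API = "https://console.neon.tech/api/v2"

def branch_request(api_key: str, project_id: str, branch_name: str) -> urllib.request.Request:
    """Build (but do not send) a request that branches both schema and
    data, so the agent's experiment never disturbs production."""
    body = json.dumps({"branch": {"name": branch_name}}).encode()
    return urllib.request.Request(
        f"{NEON_API}/projects/{project_id}/branches",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = branch_request("NEON_API_KEY", "my-project", "agent-experiment-1")
print(req.full_url)
```

Because a branch is just copy-on-write metadata over shared storage, an agent can create one per experiment and discard it afterwards, which is what keeps cost proportional to the queries actually run.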

"The era of AI-native, agent-driven applications is reshaping what a database must do," said Ali Ghodsi, Co-Founder and CEO at Databricks. "Neon proves it: four out of every five databases on their platform are spun up by code, not humans. By bringing Neon into Databricks, we're giving developers a serverless Postgres that can keep up with agentic speed, pay-as-you-go economics and the openness of the Postgres community."

Databricks and Neon's Shared Vision

Together, Databricks and Neon will work to remove the traditional limitations of databases that require compute and storage to scale in tandem — an inefficiency that hinders AI workloads. The integration of Neon's serverless Postgres architecture with the Databricks Data Intelligence Platform will help developers and enterprise teams efficiently build and deploy AI agent systems. This approach not only prevents performance bottlenecks from thousands of concurrent agents but also simplifies infrastructure, reduces costs and accelerates innovation — all with Databricks' security, governance and scalability at the core.

"Four years ago, we set out to build the best Postgres for the cloud that was serverless, highly scalable, and open to everyone. With this acquisition, we plan to accelerate that mission with the support and resources of an AI giant," said Nikita Shamgunov, CEO of Neon. "Databricks was founded by open source pioneers committed to making it easier for developers to work with data and AI at any scale. Together, we are starting a new chapter on an even more ambitious journey."

Neon's talented team is expected to join Databricks after the transaction closes, bringing deep expertise and continuity for Neon's vibrant community. Together, Neon and Databricks will empower organizations to eliminate data silos, simplify architecture and build AI agents that are more responsive, reliable and secure.

We plan to share more at Data + AI Summit in San Francisco, taking place June 9–12.

Details Regarding the Proposed Acquisition
The proposed acquisition is subject to customary closing conditions, including any required regulatory clearances.

About Neon
Neon was founded in 2021 by a team of experienced database engineers and Postgres contributors with a singular goal: to build a serverless Postgres platform that helps developers build reliable and scalable applications faster, from personal projects to startups, all the way to enterprises.

About Databricks
Databricks is the Data and AI company. More than 10,000 organizations worldwide — including Block, Comcast, Condé Nast, Rivian, Shell and over 60% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to take control of their data and put it to work with AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on X, LinkedIn and Facebook.


Groq named official inference provider for Saudi AI company HUMAIN

Groq, the pioneer in AI inference, today announced a major global expansion, accelerating its rise as an emerging hyperscaler. With significant new deployments and growing momentum with partners like HUMAIN, Groq continues to set the standard for AI infrastructure speed, scalability, and cost-efficiency.

Groq has been named an official inference provider for HUMAIN, a newly launched AI company headquartered in Saudi Arabia and designed to operate across the full AI value chain. HUMAIN's mission is to transform economies through large-scale AI capabilities, from infrastructure to state-of-the-art models, and Groq's ultra-efficient inference technology will be central to that mission.

The announcement builds on Groq's opening of a data center in Dammam, Saudi Arabia, which has been serving traffic since February. It's part of a $1.5 billion commitment from the Kingdom to supercharge AI development in the region and expand Groq's presence in global markets.

Groq prioritizes U.S.-based development for its systems and has scaled rapidly to meet the needs of a global AI ecosystem—filling a strategic void that might otherwise be met by foreign providers. With new data centers coming online across North America this month, Groq has the capacity to serve today's most demanding workloads and continues to expand to meet tomorrow's.

More than 1.5 million developers and leading global organizations now trust Groq to build AI applications with speed, reliability, and scale.

To learn more about Groq's partnership with HUMAIN, visit: https://groq.humain.ai/

About Groq
Groq is the AI inference platform redefining price performance. Its custom-built LPU and cloud have been specifically designed to run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over 1.5 million developers trust Groq to build fast and scale smarter.

Media Contact
Groq
pr-media@groq.com


Groq partners with Meta to deliver fast inference for the official Llama API

Meta and Groq have joined forces to deliver blazing-fast, zero-setup access to Llama 4. Developers can request early access to the official Llama API.


Introducing the fastest way to run the world's most trusted openly available models with no tradeoffs

Groq, a leader in AI inference, announced today its partnership with Meta to deliver fast inference for the official Llama API – giving developers the fastest, most cost-effective way to run the latest Llama models.

Now in preview, Llama 4 models accelerated by Groq run on the Groq LPU, the world's most efficient inference chip. That means developers can run Llama models with no tradeoffs: low cost, fast responses, predictable low latency, and reliable scaling for production workloads.

"Teaming up with Meta for the official Llama API raises the bar for model performance," said Jonathan Ross, CEO and Founder of Groq. "Groq delivers the speed, consistency, and cost efficiency that production AI demands, while giving developers the flexibility and control they need to build fast."

Unlike general-purpose GPU stacks, Groq is vertically integrated for one job: inference. Builders are increasingly switching to Groq because every layer, from custom silicon to cloud delivery, is engineered to deliver consistent speed and cost efficiency without compromise.

The Llama API is the first-party access point for Meta's openly available models, optimized for production use.

With Groq infrastructure, developers get:

  • Throughput of up to 625 tokens/sec
  • Minimal lift to get started – just three lines of code to migrate from OpenAI
  • No cold starts, no tuning, no GPU overhead
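The "three lines" claim refers to Groq's OpenAI-compatible endpoint: with the OpenAI SDK you typically change only the base URL, the API key, and the model name. The sketch below shows the same request shape using only the standard library, so nothing is actually sent; the endpoint and model ID are assumptions drawn from Groq's public documentation, not from this announcement.

```python
import json
import urllib.request

# The three values that change when migrating from OpenAI to Groq's
# OpenAI-compatible API (values assumed from Groq's public docs):
BASE_URL = "https://api.groq.com/openai/v1"          # 1: was api.openai.com/v1
API_KEY = "YOUR_GROQ_API_KEY"                        # 2: was your OpenAI key
MODEL = "meta-llama/llama-4-scout-17b-16e-instruct"  # 3: was e.g. "gpt-4o"

def chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )

req = chat_request("Hello, Llama!")
print(req.full_url)
```

Everything else in an existing OpenAI-SDK codebase — request shape, response parsing, streaming — stays as-is, which is what makes the migration effectively three lines.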

Fortune 500 companies and more than 1.4 million developers already use Groq to build real-time AI applications with speed, reliability, and scale.

The Llama API is available to select developers in preview here with broader rollout planned in the coming weeks.

For more information on the Llama API x Groq partnership, please visit here.

About Groq
Groq is the AI inference platform redefining price and performance. Its custom-built LPU and cloud run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over a million developers use Groq to build fast and scale smarter.

Media Contact
Groq PR
pr-media@groq.com


Lightmatter announces Passage L200, the fastest co-packaged optics for AI

Lightmatter's revolutionary 3D photonic interconnect solution eliminates bandwidth bottlenecks in AI infrastructure for datacenters.

Lightmatter, the leader in photonic supercomputing, today announced Passage™ L200, the world’s first 3D co-packaged optics (CPO) product. Designed to integrate with the latest XPU and switch silicon designs, Passage L200 unleashes unprecedented AI performance scaling by eliminating interconnect bandwidth bottlenecks. The L200 3D CPO family includes both 32 Tbps and 64 Tbps versions, representing a 5 to 10x improvement over existing solutions. This enables over 200 Tbps of total I/O bandwidth per chip package, resulting in up to 8X faster training time for advanced AI models.

Bandwidth scaling has significantly trailed gains in compute performance. Continued advances in AI compute require fundamental changes in interconnect technology. Current connectivity solutions, including electrical, optical and conventional CPO, are bandwidth limited because their I/O interfaces are restricted to the “shoreline,” or edge of the chip. Passage L200 overcomes these constraints with the world’s first edgeless I/O, scaling bandwidth across the entire die area. This modular 3D CPO solution leverages a standard interoperable UCIe die-to-die (D2D) interface, and facilitates scalable chiplet-based architectures to seamlessly integrate with next generation XPUs and switches.

The Passage L200 3D CPO integrates the latest of Alphawave Semi’s chiplet technology portfolio, combining silicon-proven low power and low latency UCIe and optics-ready SerDes with Lightmatter’s category-defining photonic integrated circuit (PIC). Alphawave Semi’s advanced-node electrical integrated circuit (EIC) is 3D integrated on the Passage PIC using standard chip-on-wafer (CoW) techniques. Passage 3D integration enables SerDes I/O to be positioned anywhere on the die, rather than being confined to its shoreline, delivering the equivalent bandwidth of 40 pluggable optical transceivers per L200. Additionally, multiple L200s can be integrated in a package to serve a broad range of XPU and switch applications.

“Bandwidth scaling has become the critical impediment to AI advancement,” said Nick Harris, founder and CEO of Lightmatter. “The engineering breakthroughs represented by our L200 family of 3D CPO solutions provide the fundamental building blocks that will pave the way for next-gen AI processors and switches.”

“We are thrilled to collaborate with Lightmatter on the delivery of the L200,” said Tony Pialis, president and Chief Executive Officer of Alphawave Semi. “Our extensive portfolio, featuring proven chiplets, optical DSPs, and connectivity silicon subsystems, synergizes with Lightmatter’s 3D photonics to create a dynamic solution that propels the next generation of AI infrastructure forward.”

“AI data center interconnects face growing bandwidth and power challenges,” said Andrew Schmitt, founder and directing analyst at Cignal AI. “Co-Packaged Optics (CPO) – integrating optics directly onto XPUs and switches – is the inevitable solution. Lightmatter’s bold approach delivers the essential elements of CPO and gives hyperscalers and chip manufacturers a path to deliver high-performance systems.”

The L200 is engineered for high-volume manufacturing with industry-leading silicon photonics fab and OSAT partners including Global Foundries, ASE, and Amkor as well as advanced node CMOS foundries. Built with advanced redundancy and resiliency, the L200 is powered by Lightmatter’s Guide light engine, delivering exceptional laser integration and total optical power per module to support the full bandwidth of L200.

Lightmatter offers two product SKUs: the L200 (32Tbps) and L200X (64Tbps) 3D CPO engines. These solutions build upon the company’s proven Passage technology platform, offering 16 WDM wavelengths per waveguide/fiber with the most advanced and fully integrated photonics control capabilities.

Key features of the L200 and L200X include:

  • Advanced node CMOS EIC
    • 32Gbps UCIe D2D interface (IP offered royalty-free by Alphawave Semi for accompanying XPU/Switch die)
    • 320 Optically-optimized multi-rate/multi-protocol SerDes
  • Passage PIC
    • L200: 56 Gbps NRZ, 32 Tbps total (Tx+Rx)
    • L200X: 106/112 Gbps PAM4, 64 Tbps total (Tx+Rx)
    • 16 wavelength WDM per waveguide/fiber for 800Gbps/1.6Tbps per fiber
    • Pluggable fiber connectors for lasers and data

Available in 2026, Lightmatter’s L200 and L200X 3D CPO chips are designed to accelerate time to market and performance of next generation XPUs and switches for the next wave of foundation AI models.

Lightmatter will showcase its latest innovations in its booth (#5145) at the Optical Fiber Conference in San Francisco, from April 1-3, 2025.

For more information, please visit https://lightmatter.co/

About Lightmatter

Lightmatter is leading a revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.

Media Contact:

Lightmatter
John O’Brien
press@lightmatter.co


Lightmatter unveils the world’s fastest AI interconnect, the Passage M1000 3D Photonic Superchip

Lightmatter’s breakthrough 3D photonic interposer enables the highest bandwidth and largest die complexes for next-gen AI infrastructure silicon designs


In the rapidly evolving landscape of AI, the demand for efficient and high-speed interconnect solutions is paramount. Lightmatter, a leading startup in silicon photonics technology, has introduced the Passage M1000, an innovative optical interposer designed to meet the increasing bandwidth needs of AI applications, thereby transforming data center operations.

  • The Passage M1000 is a silicon photonic interposer that enables high-bandwidth communication between AI chips, achieving speeds of petabits per second and addressing the demands of modern AI applications.
  • Edgeless I/O allows electro-optical connections across the entire surface of the chip, eliminating bottlenecks and facilitating seamless communication between stacked dies through a reconfigurable waveguide network.
  • With a record-breaking 114 terabits per second of optical bandwidth, the M1000 outperforms conventional solutions, thanks to its 256 fiber optic attach points.
  • By utilizing silicon photonics, the M1000 reduces energy consumption compared to traditional interconnects, making it a sustainable choice for data centers as AI workloads grow.

Lightmatter, the leader in photonic supercomputing, today announced Passage™ M1000, a groundbreaking 3D Photonic Superchip designed for next-generation XPUs and switches. The Passage™ M1000 enables a record-breaking 114 Tbps total optical bandwidth for the most demanding AI infrastructure applications. At more than 4,000 square millimeters, the M1000 reference platform is a multi-reticle active photonic interposer that enables the world’s largest die complexes in a 3D package, providing connectivity to thousands of GPUs in a single domain.

In existing chip designs, interconnects for processors, memory, and I/O chiplets are bandwidth limited because electrical input/output (I/O) connections are restricted to the edges of these chips. The Passage M1000 overcomes this limitation by unleashing electro-optical I/O virtually anywhere on its surface for the die complex stacked on top. Pervasive interposer connectivity is enabled by an extensive and reconfigurable waveguide network that carries high-bandwidth WDM optical signals throughout the M1000. With fully integrated fiber attachment supporting an unprecedented 256 fibers, the M1000 delivers an order of magnitude higher bandwidth in a smaller package size compared to conventional Co-Packaged Optics (CPO) and similar offerings.

Lightmatter has worked closely with industry leaders, including GlobalFoundries (GF) and Amkor, to facilitate production readiness for customer designs based on the M1000 reference platform, while ensuring the highest standards of quality and performance. The Passage M1000 utilizes the GF Fotonix™ silicon photonics platform which offers seamless integration of photonic components with high-performance CMOS logic into a single die, creating a production-ready design that can scale effectively with AI demands.

“Passage M1000 is a breakthrough achievement in photonics and semiconductor packaging for AI infrastructure,” said Nick Harris, founder and CEO of Lightmatter. “We are delivering a cutting-edge photonics roadmap years ahead of industry projections. Shoreline is no longer a limitation for I/O. This is all made possible by our close co-engineering with leading foundry and assembly partners and our supply chain ecosystem.”

“GF has a long-standing strategic partnership with Lightmatter to commercialize its breakthrough photonics technology for AI data centers,” said Dr. Thomas Caulfield, president and CEO of GF. “The M1000 photonic interposer architecture, built on our GF Fotonix platform, sets the pace for photonics performance and will transform advanced AI chip design. Our advanced manufacturing capabilities and highly flexible, monolithic silicon photonics solution are instrumental in bringing this technology to market, and we look forward to continuing our close collaboration with Lightmatter.”

“The insatiable demand for scale-up bandwidth is fueling interconnect innovation and momentum, with in-package optical integration at the forefront,” said Vlad Kozlov, founder and CEO, LightCounting. “Lightmatter’s unique 3D active photonic interposer presents a compelling advancement, with capabilities that surpass existing CPO solutions.”

Key features of the M1000 include:

  • 8-tile 3D active interposer with integrated programmable waveguide network
  • 3D integrated electrical integrated circuits containing a total of 1024 Electrical SerDes
  • 56 Gbps NRZ modulation
  • 8 wavelength WDM transmission on waveguides and fibers
  • 256 optical fibers edge attached with 448 Gbps bandwidth per fiber
  • 1.5 kW power delivery in integrated advanced package (7,735 mm²)
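The headline bandwidth figure can be cross-checked from the feature list alone:

```python
# Cross-check of the M1000 figures using only numbers quoted in the list:
wavelengths_per_fiber = 8   # WDM wavelengths per waveguide/fiber
line_rate_gbps = 56         # 56 Gbps NRZ modulation per wavelength
fibers = 256                # edge-attached optical fibers

per_fiber_gbps = wavelengths_per_fiber * line_rate_gbps
total_tbps = fibers * per_fiber_gbps / 1000

assert per_fiber_gbps == 448        # matches "448 Gbps bandwidth per fiber"
print(round(total_tbps, 1))         # 114.7, matching the "114 Tbps" headline
```

The quoted 114 Tbps total is simply 256 fibers at 448 Gbps each, rounded down.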

The Passage M1000 and Passage L200, also announced today, accelerate advances in AI by enabling larger and more capable AI models to be trained faster than ever before. Passage M1000 will be available in the summer of 2025, accompanied by the world’s most powerful light engine: Guide™, from Lightmatter.

Lightmatter will showcase its latest innovations in its booth #5145 at the Optical Fiber Conference in San Francisco, from April 1-3, 2025.

For more information on Passage M1000, please visit https://lightmatter.co/

About Lightmatter

Lightmatter is leading a revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.


Contacts

Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co


All Things Photonics podcast spotlight with Lightmatter Founder & CEO Nick Harris

Lightmatter is pushing the envelope of AI interconnect and networking standards in an effort to bring AI infrastructure into a photonic era. MIT graduate Nick Harris founded Lightmatter in 2017. In this Spotlight conversation, he offers a look into the formation and future of his company, as well as the breakthroughs driving the photonic supercomputing group today. Spotlight from "All Things Photonics" is a special series of intimate sit-down conversations with preeminent figures in our industry.

Photonics Media publishes business-to-business magazines, buyers’ guides, and websites for individuals working with light-based technologies in the photonics industry. A pioneering publisher in the relatively new discipline of photonics, Photonics Media has built a large global audience comprising academics and researchers, manufacturers, and end-users. The Photonics Media YouTube channel features video coverage of news, products, and events in the photonics industry.


AlphaSense Surpasses $400M in ARR, Accelerating Growth with Private Content Expansion and Generative AI Innovation

Big news! AlphaSense has surpassed $400M in ARR and is now powering smarter decisions for 6,000+ global customers with cutting-edge AI.

The company is on a mission to help businesses uncover deeper insights, move faster, and make data-driven decisions with confidence. And it's just getting started: even more innovations are coming to the platform this year.

Momentum and rapid growth continue for the market intelligence leader as it achieves key milestones in revenue, customers, product innovation, and industry recognition

AlphaSense, the leading AI-powered market intelligence and search platform, today announced it has exceeded $400 million in annual recurring revenue (ARR), more than doubling its ARR since announcing it reached $200 million in April 2024. This success is driven by its growing customer base that has reached more than 6,000 customers—including 88% of the S&P 100—all of whom rely on AlphaSense for its cutting-edge AI technology to gain deeper insights, make data-driven decisions, and stay ahead of market trends. With its innovative platform, AlphaSense continues to empower businesses across industries—including leading financial institutions—to accelerate growth, streamline operations, and enhance strategic decision-making.

"We are at a tipping point where AI-driven insights are no longer a luxury but a necessity—every company's market value is the sum of the decisions it makes," said Jack Kokko, CEO and Founder of AlphaSense. "Surpassing $400 million in ARR and our rapid growth are clear signals that businesses are recognizing the transformative power of our end-to-end market intelligence platform. As we scale, our focus remains on product and technology innovation, ensuring we deliver high-value solutions and cutting-edge AI and smart workflow capabilities to our customers."

Market Momentum & Strategic Expansion
AlphaSense's record-breaking growth follows a series of strategic moves, including its landmark $930 million acquisition in June 2024 of Tegus, the market's leading expert interview library, covering more than 35,000 public and private companies with over 150,000 transcripts of investor-led expert interviews. This acquisition combined AlphaSense's market-leading search and Generative AI technology with the market's top private content library, giving clients access to a vast source of essential insights on the companies and industries that matter to their decision-making.

With a total of 450 million searchable documents—including equity research, event transcripts, expert interviews, company filings, news, and trade journals—AlphaSense equips decision-makers with unparalleled market insights, all powered by advanced AI technology. World-leading organizations, including Adobe, Amazon, American Express, Cisco, Nvidia, Microsoft, Pfizer, Salesforce, Nestlé, and JPMorgan, continue to turn to AlphaSense to redefine how businesses access and leverage market intelligence.

In addition to being trusted by customers, AlphaSense also celebrated a landmark year of achievements in 2024, including record-breaking industry and individual award wins from prestigious organizations such as CNBC, Fast Company, Fortune, Inc., and more.

"It's been a privilege to partner with Jack and the AlphaSense team over the past few years as they've redefined how organizations discover and act on critical insights," said James Luo, General Partner at CapitalG, Alphabet's independent growth fund. "Reaching $400M ARR is a testament to their bold vision and the transformative power of their AI-driven platform. We're thrilled to continue supporting AlphaSense as they lead the market intelligence revolution for businesses worldwide."

Innovation & Future Growth
Last week, AlphaSense announced its latest leap forward in transforming how businesses access and analyze critical insights with Generative AI. With the groundbreaking additions of Generative Search and Generative Grid, AlphaSense is not just advancing the industry—it's redefining the possibilities of AI in market intelligence, setting a new standard for speed, accuracy, and depth of analysis.

Looking ahead, AlphaSense is poised to further accelerate its momentum through continued investments in Generative AI, Enterprise Intelligence, deeper content integrations, and global market expansion. As industries navigate increasing complexity and information overload, AlphaSense is dedicated to delivering AI-powered solutions that empower professionals to make better decisions, faster.

Experience the power of AlphaSense's Generative AI by joining the early access program for Generative Grid. Plus, stay connected on the go with the AlphaSense mobile app. To learn more about career opportunities, visit the company's careers page.


Mesh closes $82M Series B to accelerate building the first global crypto payments network

The round, led by Paradigm with participation from Consensys, QuantumLight, Yolo Investments, and others, was secured using PayPal USD (PYUSD) stablecoin, setting a historic precedent for stablecoin funding

Mesh, the leading crypto payments network enabling seamless transactions with low-cost, instant conversions, today announced it closed an $82 million Series B funding round, bringing its total amount raised to over $120 million. With payments and stablecoins widely seen as the biggest catalyst for crypto's mass adoption, the funds position the company for sustained leadership in the industry's most promising sector. The round was led by Paradigm, with participation from Consensys (parent company of MetaMask), QuantumLight Capital (founded by Revolut Founder & CEO Nik Storonsky), Yolo Investments, and others. Mesh has previously raised from investors including PayPal Ventures, Galaxy Ventures, and MoneyForward.

In a historic moment for both venture funding and stablecoins, most of the $82 million investment was settled in PayPal USD (PYUSD) stablecoin. PYUSD allowed the funding to close instantly, and Mesh's own technology was used to transfer the assets securely. For VC funding, stablecoins offer settlement that is instant, cheap, transparent, and available 24/7. This method of funding comes on the heels of PayPal Ventures' 2024 investment in Mesh, which was also completed largely in PYUSD.

Mesh has already partnered with major players such as MetaMask, Shift4, and Revolut, making its technology available to over 400 million users in over 100 countries worldwide. Now, the company can further accelerate product development and the expansion of its APIs to power hundreds of crypto and payments platforms.

"Stablecoins present the single biggest opportunity to disrupt the payments industry since the invention of credit and debit cards, and Mesh is now first in line to scale that vision across the world," said Bam Azizi, CEO and Co-Founder of Mesh. "With this funding, we're expanding the first truly global crypto payments network – one that allows users to pay with any crypto they hold while ensuring merchants can settle in the stablecoin of their choice, just like they do with fiat today."

Mesh's flagship payments solution is powered by its proprietary SmartFunding technology, which eliminates friction between users' assets and merchants' settlement requirements. That means an asset like Bitcoin, Ethereum, or Solana can be used as a means of payment, while merchants automatically receive the transaction amount in stablecoins such as PYUSD, USDT, or USDC, all without requiring the user to manually convert assets beforehand.
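To make the flow concrete, here is a minimal, purely illustrative sketch of a SmartFunding-style quote: the user pays in whatever asset they hold, and the merchant is credited in their chosen stablecoin. This is not Mesh's actual API; the function name, rate table, and prices are hypothetical placeholders.

```python
# Hypothetical sketch of a SmartFunding-style payment flow (not Mesh's real API).
# Illustrative spot prices in USD; real systems would pull live market rates.
SPOT_USD = {"BTC": 97_000.0, "ETH": 2_700.0, "SOL": 190.0,
            "PYUSD": 1.0, "USDT": 1.0, "USDC": 1.0}

def smart_funding_quote(pay_asset: str, settle_stablecoin: str,
                        amount_due_usd: float) -> dict:
    """Quote how much of the user's asset covers an invoice that the
    merchant settles in a stablecoin, converting automatically."""
    pay_amount = amount_due_usd / SPOT_USD[pay_asset]             # debited from user
    settle_amount = amount_due_usd / SPOT_USD[settle_stablecoin]  # credited to merchant
    return {"user_pays": (round(pay_amount, 8), pay_asset),
            "merchant_receives": (round(settle_amount, 2), settle_stablecoin)}

# User holds BTC; merchant wants USDC for a $250 invoice.
quote = smart_funding_quote("BTC", "USDC", amount_due_usd=250.0)
print(quote)
```

The key design point the paragraph describes is that the conversion step sits inside the network, so neither party performs a manual swap: the user never touches stablecoins, and the merchant never touches volatile assets.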

"We think crypto and stablecoins will be an enormous transformation to payments," said Charlie Noyes, General Partner at Paradigm. "Mesh makes paying with crypto as simple as using a credit card for users and merchants while preserving the benefits of transacting over blockchain rails."

Mesh is on track to become an integral part of global payments as the industry moves toward a stablecoin-dominated ecosystem, with stablecoins already commanding a market cap of over $200 billion and surpassing $27.6 trillion in transaction volume in 2024.

For more information about Mesh, visit https://meshconnect.com/.

About Mesh

Founded in 2020, Mesh is building the first global crypto payments network, connecting hundreds of exchanges, wallets, and financial services platforms to enable seamless digital asset payments and conversions. By unifying these platforms into a single network, Mesh is pioneering an open, connected, and secure ecosystem for digital finance. For more information, visit https://www.meshconnect.com/.

Contact: mesh@greenbrier.partners
