Databricks acquires Neon for $1 billion to deliver serverless Postgres for AI agents and developers
Databricks, the Data and AI company, announced its intent to acquire Neon, a leading serverless Postgres company. As the $100-billion-plus database market braces for unprecedented disruption driven by AI, Databricks plans to continue innovating and investing in Neon's database and developer experience for existing and new Neon customers and partners.
Neon: An Open, Serverless Foundation for Developers and AI Agents
AI agents are becoming increasingly integral components for modern developers, and Neon is purpose-built to support their agentic workflows. Recent internal telemetry showed that over 80 percent of the databases provisioned on Neon were created automatically by AI agents rather than by humans, underscoring how explosively agentic workloads are growing. These workloads differ from human-driven patterns in three important ways:
- Speed + flexibility: Agents operate at machine speed and traditional database provisioning often becomes a bottleneck — Neon can spin up a fully isolated Postgres instance in 500 milliseconds or less and supports instant branching and forking of not only database schema but also data, so experiments never disturb production.
- Cost proportionality: Agents demand a cost structure that scales precisely with usage — Neon's full separation of compute and storage keeps the total cost of ownership for thousands of ephemeral databases proportional to the queries they actually run.
- Open source ecosystem: Agents expect to leverage the rich Postgres community — Neon is 100 percent Postgres-compatible and works out of the box with popular extensions.
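The instant branching described above can be pictured as a logical snapshot: both schema and data are forked, so an agent's experiment never touches production. The toy class below is an illustrative sketch of that copy-on-write semantics in plain Python; Neon implements this at the storage layer, not in application code, and none of these names come from Neon's API.

```python
import copy

class BranchableDatabase:
    """Toy model of branch-and-fork semantics (illustrative only)."""

    def __init__(self, schema, rows):
        self.schema = schema
        self.rows = rows

    def branch(self):
        # A branch starts as a snapshot of schema AND data, so changes on
        # the branch never disturb the parent database.
        return BranchableDatabase(copy.deepcopy(self.schema),
                                  copy.deepcopy(self.rows))

prod = BranchableDatabase({"users": ["id", "email"]},
                          {"users": [(1, "a@example.com")]})
experiment = prod.branch()
experiment.rows["users"].append((2, "agent@example.com"))  # agent mutates the branch

assert len(prod.rows["users"]) == 1        # production untouched
assert len(experiment.rows["users"]) == 2  # branch has the new row
```

In a real deployment the fork is a metadata operation over shared storage, which is what makes sub-second provisioning of thousands of ephemeral databases economical.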
"The era of AI-native, agent-driven applications is reshaping what a database must do," said Ali Ghodsi, Co-Founder and CEO at Databricks. "Neon proves it: four out of every five databases on their platform are spun up by code, not humans. By bringing Neon into Databricks, we're giving developers a serverless Postgres that can keep up with agentic speed, pay-as-you-go economics and the openness of the Postgres community."
Databricks and Neon's Shared Vision
Together, Databricks and Neon will work to remove the traditional limitations of databases that require compute and storage to scale in tandem — an inefficiency that hinders AI workloads. The integration of Neon's serverless Postgres architecture with the Databricks Data Intelligence Platform will help developers and enterprise teams efficiently build and deploy AI agent systems. This approach not only prevents performance bottlenecks from thousands of concurrent agents but also simplifies infrastructure, reduces costs and accelerates innovation — all with Databricks' security, governance and scalability at the core.
"Four years ago, we set out to build the best Postgres for the cloud that was serverless, highly scalable, and open to everyone. With this acquisition, we plan to accelerate that mission with the support and resources of an AI giant," said Nikita Shamgunov, CEO of Neon. "Databricks was founded by open source pioneers committed to making it easier for developers to work with data and AI at any scale. Together, we are starting a new chapter on an even more ambitious journey."
Neon's talented team is expected to join Databricks after the transaction closes, and the team brings deep expertise and continuity for Neon's vibrant community. Together, Neon and Databricks will empower organizations to eliminate data silos, simplify architecture and build AI agents that are more responsive, reliable and secure.
We plan to share more at Data + AI Summit in San Francisco, taking place June 9–12.
Details Regarding the Proposed Acquisition
The proposed acquisition is subject to customary closing conditions, including any required regulatory clearances.
About Neon
Neon was founded in 2021 by a team of experienced database engineers and Postgres contributors with a singular goal: to build a serverless Postgres platform that helps developers build reliable and scalable applications faster, from personal projects to startups, all the way to enterprises.
About Databricks
Databricks is the Data and AI company. More than 10,000 organizations worldwide — including Block, Comcast, Condé Nast, Rivian, Shell and over 60% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to take control of their data and put it to work with AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on X, LinkedIn and Facebook.
Groq powers Meta AI to deliver fast inference for the official Llama API
Meta and Groq have joined forces to deliver blazing-fast, zero-setup access to Llama 4. Developers can request early access to the official Llama API.
Introducing the fastest way to run the world's most trusted openly available models with no tradeoffs
Groq, a leader in AI inference, announced today its partnership with Meta to deliver fast inference for the official Llama API – giving developers the fastest, most cost-effective way to run the latest Llama models.
Now in preview, Llama 4 models accelerated by Groq will run on the Groq LPU, the world's most efficient inference chip. That means developers can run Llama models with no tradeoffs: low cost, fast responses, predictable low latency, and reliable scaling for production workloads.
"Teaming up with Meta for the official Llama API raises the bar for model performance," said Jonathan Ross, CEO and Founder of Groq. "Groq delivers the speed, consistency, and cost efficiency that production AI demands, while giving developers the flexibility and control they need to build fast."
Unlike general-purpose GPU stacks, Groq is vertically integrated for one job: inference. Builders are increasingly switching to Groq because every layer, from custom silicon to cloud delivery, is engineered to deliver consistent speed and cost efficiency without compromise.
The Llama API is the first-party access point for Meta's openly available models, optimized for production use.
With Groq infrastructure, developers get:
- Throughput of up to 625 tokens/sec
- Minimal lift to get started – just three lines of code to migrate from OpenAI
- No cold starts, no tuning, no GPU overhead
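The "three lines of code" claim refers to Groq exposing an OpenAI-compatible endpoint, so an existing client typically only needs a new base URL, API key, and model name. The sketch below shows the idea as a plain configuration diff; the model identifier is a hypothetical placeholder, not an official name.

```python
# Configuration for a typical OpenAI-style chat client.
openai_config = {
    "base_url": "https://api.openai.com/v1",
    "api_key": "OPENAI_API_KEY",   # placeholder, read from env in practice
    "model": "gpt-4o",
}

# Migrating to Groq changes just these three settings:
groq_config = {
    **openai_config,
    "base_url": "https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    "api_key": "GROQ_API_KEY",                     # placeholder
    "model": "llama-4-example",                    # hypothetical Llama model id
}

assert groq_config["base_url"] == "https://api.groq.com/openai/v1"
```

Because the request and response shapes stay the same, the rest of the application code is unchanged.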
Fortune 500 companies and more than 1.4 million developers already use Groq to build real-time AI applications with speed, reliability, and scale.
The Llama API is available to select developers in preview here with broader rollout planned in the coming weeks.
For more information on the Llama API x Groq partnership, please visit here.
About Groq
Groq is the AI inference platform redefining price and performance. Its custom-built LPU and cloud run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over a million developers use Groq to build fast and scale smarter.
Media Contact
Groq PR
pr-media@groq.com
Lightmatter announces the fastest co-packaged optics for AI: Passage L200
Lightmatter's revolutionary 3D photonic interconnect solution eliminates bandwidth bottlenecks in AI infrastructure for datacenters.
Lightmatter, the leader in photonic supercomputing, today announced Passage™ L200, the world’s first 3D co-packaged optics (CPO) product. Designed to integrate with the latest XPU and switch silicon designs, Passage L200 unleashes unprecedented AI performance scaling by eliminating interconnect bandwidth bottlenecks. The L200 3D CPO family includes both 32 Tbps and 64 Tbps versions, representing a 5 to 10x improvement over existing solutions. This enables over 200 Tbps of total I/O bandwidth per chip package, resulting in up to 8x faster training time for advanced AI models.
Bandwidth scaling has significantly trailed gains in compute performance. Continued advances in AI compute require fundamental changes in interconnect technology. Current connectivity solutions, including electrical, optical and conventional CPO, are bandwidth limited because their I/O interfaces are restricted to the “shoreline,” or edge of the chip. Passage L200 overcomes these constraints with the world’s first edgeless I/O, scaling bandwidth across the entire die area. This modular 3D CPO solution leverages a standard interoperable UCIe die-to-die (D2D) interface, and facilitates scalable chiplet-based architectures to seamlessly integrate with next generation XPUs and switches.
The Passage L200 3D CPO integrates the latest of Alphawave Semi’s chiplet technology portfolio, combining silicon-proven low power and low latency UCIe and optics-ready SerDes with Lightmatter’s category-defining photonic integrated circuit (PIC). Alphawave Semi’s advanced-node electrical integrated circuit (EIC) is 3D integrated on the Passage PIC using standard chip-on-wafer (CoW) techniques. Passage 3D integration enables SerDes I/O to be positioned anywhere on the die, rather than being confined to its shoreline, delivering the equivalent bandwidth of 40 pluggable optical transceivers per L200. Additionally, multiple L200s can be integrated in a package to serve a broad range of XPU and switch applications.
“Bandwidth scaling has become the critical impediment to AI advancement,” said Nick Harris, founder and CEO of Lightmatter. “The engineering breakthroughs represented by our L200 family of 3D CPO solutions provide the fundamental building blocks that will pave the way for next-gen AI processors and switches.”
“We are thrilled to collaborate with Lightmatter on the delivery of the L200,” said Tony Pialis, president and Chief Executive Officer of Alphawave Semi. “Our extensive portfolio, featuring proven chiplets, optical DSPs, and connectivity silicon subsystems, synergizes with Lightmatter’s 3D photonics to create a dynamic solution that propels the next generation of AI infrastructure forward.”
“AI data center interconnects face growing bandwidth and power challenges,” said Andrew Schmitt, founder and directing analyst at Cignal AI. “Co-Packaged Optics (CPO) – integrating optics directly onto XPUs and switches – is the inevitable solution. Lightmatter’s bold approach delivers the essential elements of CPO and gives hyperscalers and chip manufacturers a path to deliver high-performance systems.”
The L200 is engineered for high-volume manufacturing with industry-leading silicon photonics fab and OSAT partners including GlobalFoundries, ASE, and Amkor as well as advanced node CMOS foundries. Built with advanced redundancy and resiliency, the L200 is powered by Lightmatter’s Guide light engine, delivering exceptional laser integration and total optical power per module to support the full bandwidth of L200.
Lightmatter offers two product SKUs: the L200 (32 Tbps) and L200X (64 Tbps) 3D CPO engines. These solutions build upon the company’s proven Passage technology platform, offering 16 WDM wavelengths per waveguide/fiber with the most advanced and fully integrated photonics control capabilities.
Key features of the L200 and L200X include:
- Advanced node CMOS EIC
- 32Gbps UCIe D2D interface (IP offered royalty-free by Alphawave Semi for accompanying XPU/Switch die)
- 320 Optically-optimized multi-rate/multi-protocol SerDes
- Passage PIC
- L200: 56 Gbps NRZ, 32 Tbps total (Tx+Rx)
- L200X: 106/112 Gbps PAM4, 64 Tbps total (Tx+Rx)
- 16 wavelength WDM per waveguide/fiber for 800Gbps/1.6Tbps per fiber
- Pluggable fiber connectors for lasers and data
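The headline numbers above are internally consistent, which a quick back-of-envelope check makes clear. The sketch below assumes roughly 50 Gbps of payload per 56 Gbps NRZ wavelength after line-coding/FEC overhead; that overhead figure is our assumption, not a Lightmatter specification.

```python
# Reconcile the L200 figures quoted in the release.
wavelengths_per_fiber = 16          # "16 wavelength WDM per waveguide/fiber"
payload_per_wavelength_gbps = 50    # assumed payload of a 56 Gbps NRZ lane
per_fiber_gbps = wavelengths_per_fiber * payload_per_wavelength_gbps
assert per_fiber_gbps == 800        # matches "800Gbps ... per fiber" (L200)

l200_total_tbps = 32                # "32 Tbps total (Tx+Rx)"
transceiver_equivalents = (l200_total_tbps * 1000) // per_fiber_gbps
assert transceiver_equivalents == 40  # matches "40 pluggable optical transceivers per L200"

# L200X: PAM4 carries two bits per symbol, doubling the per-fiber rate.
assert per_fiber_gbps * 2 == 1600   # matches "1.6Tbps per fiber"
```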
Available in 2026, Lightmatter’s L200 and L200X 3D CPO chips are designed to accelerate time to market and performance of next generation XPUs and switches for the next wave of foundation AI models.
Lightmatter will showcase its latest innovations in its booth (#5145) at the Optical Fiber Conference in San Francisco, from April 1-3, 2025.
For more information, please visit https://lightmatter.co/
About Lightmatter
Lightmatter is leading a revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.
Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co
Lightmatter unveils the world’s fastest AI interconnect Passage M1000 3D Photonic Superchip
Lightmatter's breakthrough 3D photonic interposer enables the highest bandwidth and largest die complexes for next-gen AI infrastructure silicon designs
In the rapidly evolving landscape of AI, the demand for efficient and high-speed interconnect solutions is paramount. Lightmatter, a leading startup in silicon photonics technology, has introduced the Passage M1000, an innovative optical interposer designed to meet the increasing bandwidth needs of AI applications, thereby transforming data center operations.
- The Passage M1000 is a silicon photonic interposer that enables high-bandwidth communication between AI chips, achieving speeds of petabits per second and addressing the demands of modern AI applications.
- Its edgeless I/O design allows electro-optical I/O across the entire surface of the chip, eliminating bottlenecks and facilitating seamless communication between stacked dies through a reconfigurable waveguide network.
- With a record-breaking 114 terabits per second of optical bandwidth, the M1000 outperforms conventional solutions, thanks to its 256 fiber optic attach points.
- By utilizing silicon photonics, the M1000 reduces energy consumption compared to traditional interconnects, making it a sustainable choice for data centers as AI workloads grow.
Lightmatter, the leader in photonic supercomputing, today announced Passage™ M1000, a groundbreaking 3D Photonic Superchip designed for next-generation XPUs and switches. The Passage™ M1000 enables a record-breaking 114 Tbps total optical bandwidth for the most demanding AI infrastructure applications. At more than 4,000 square millimeters, the M1000 reference platform is a multi-reticle active photonic interposer that enables the world’s largest die complexes in a 3D package, providing connectivity to thousands of GPUs in a single domain.
In existing chip designs, interconnects for processors, memory, and I/O chiplets are bandwidth limited because electrical input/output (I/O) connections are restricted to the edges of these chips. The Passage M1000 overcomes this limitation by unleashing electro-optical I/O virtually anywhere on its surface for the die complex stacked on top. Pervasive interposer connectivity is enabled by an extensive and reconfigurable waveguide network that carries high-bandwidth WDM optical signals throughout the M1000. With fully integrated fiber attachment supporting an unprecedented 256 fibers, the M1000 delivers an order of magnitude higher bandwidth in a smaller package size compared to conventional Co-Packaged Optics (CPO) and similar offerings.
Lightmatter has worked closely with industry leaders, including GlobalFoundries (GF) and Amkor, to facilitate production readiness for customer designs based on the M1000 reference platform, while ensuring the highest standards of quality and performance. The Passage M1000 utilizes the GF Fotonix™ silicon photonics platform which offers seamless integration of photonic components with high-performance CMOS logic into a single die, creating a production-ready design that can scale effectively with AI demands.
“Passage M1000 is a breakthrough achievement in photonics and semiconductor packaging for AI infrastructure,” said Nick Harris, founder and CEO of Lightmatter. “We are delivering a cutting-edge photonics roadmap years ahead of industry projections. Shoreline is no longer a limitation for I/O. This is all made possible by our close co-engineering with leading foundry and assembly partners and our supply chain ecosystem.”
“GF has a long-standing strategic partnership with Lightmatter to commercialize its breakthrough photonics technology for AI data centers,” said Dr. Thomas Caulfield, president and CEO of GF. “The M1000 photonic interposer architecture, built on our GF Fotonix platform, sets the pace for photonics performance and will transform advanced AI chip design. Our advanced manufacturing capabilities and highly flexible, monolithic silicon photonics solution are instrumental in bringing this technology to market, and we look forward to continuing our close collaboration with Lightmatter.”
“The insatiable demand for scale-up bandwidth is fueling interconnect innovation and momentum, with in-package optical integration at the forefront,” said Vlad Kozlov, founder and CEO, LightCounting. “Lightmatter’s unique 3D active photonic interposer presents a compelling advancement, with capabilities that surpass existing CPO solutions.”
Key features of the M1000 include:
- 8-tile 3D active interposer with integrated programmable waveguide network
- 3D integrated electrical integrated circuits containing a total of 1024 Electrical SerDes
- 56 Gbps NRZ modulation
- 8 wavelength WDM transmission on waveguides and fibers.
- 256 optical fibers edge attached with 448 Gbps bandwidth per fiber
- 1.5 kW power delivery in integrated advanced package (7,735 mm2)
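The M1000's quoted figures also check out arithmetically: eight 56 Gbps NRZ wavelengths per fiber give the stated 448 Gbps per fiber, and 256 fibers give the 114 Tbps total. A minimal verification, using only numbers from the release:

```python
# Reconcile the M1000 figures quoted in the release.
wavelengths = 8                 # "8 wavelength WDM transmission"
rate_per_wavelength_gbps = 56   # "56 Gbps NRZ modulation"
per_fiber_gbps = wavelengths * rate_per_wavelength_gbps
assert per_fiber_gbps == 448    # matches "448 Gbps bandwidth per fiber"

fibers = 256                    # "256 optical fibers edge attached"
total_tbps = fibers * per_fiber_gbps / 1000
assert int(total_tbps) == 114   # matches "114 Tbps total optical bandwidth"
```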
The Passage M1000 and Passage L200, also announced today, accelerate advances in AI by enabling larger and more capable AI models to be trained faster than ever before. Passage M1000 will be available in the summer of 2025, accompanied by the world’s most powerful light engine: Guide™, from Lightmatter.
Lightmatter will showcase its latest innovations in its booth #5145 at the Optical Fiber Conference in San Francisco, from April 1-3, 2025.
For more information on Passage M1000, please visit https://lightmatter.co/
About Lightmatter
Lightmatter is leading a revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company’s groundbreaking Passage™ platform—the world’s first 3D-stacked silicon photonics engine—connects thousands to millions of processors at the speed of light. Designed to eliminate critical data bottlenecks, Lightmatter’s technology enables unparalleled efficiency and scalability for the most advanced AI and high-performance computing workloads, pushing the boundaries of AI infrastructure.
Contacts
Media Contact:
Lightmatter
John O’Brien
press@lightmatter.co
All Things Photonics podcast spotlight with Lightmatter Founder & CEO Nick Harris
Lightmatter is pushing the envelope of AI interconnect and networking standards. It’s all in an effort to bring AI infrastructure into a photonic era. MIT graduate Nick Harris founded Lightmatter in 2017. In this Spotlight conversation, he offers a look into the formation and future of his company, as well as the breakthroughs driving the photonic supercomputing group today. Spotlight from "All Things Photonics" is a special series of intimate sit-down conversations with preeminent figures in our industry.
Photonics Media publishes business-to-business magazines, buyers’ guides, and websites for individuals working with light-based technologies in the photonics industry. A pioneering publisher in the relatively new discipline of photonics, Photonics Media has built a large global audience comprising academics and researchers, manufacturers, and end-users. The Photonics Media YouTube channel features video coverage of news, products, and events in the photonics industry.
AlphaSense Surpasses $400M in ARR, Accelerating Growth with Private Content Expansion and Generative AI Innovation
Big news! AlphaSense has surpassed $400M in ARR and is now powering smarter decisions for 6,000+ global customers with cutting-edge AI.
They are on a mission to help businesses uncover deeper insights, move faster, and make data-driven decisions with confidence. And they're just getting started. Even more exciting innovations are coming to the platform this year.
Momentum and rapid growth continue for the Market Intelligence Leader as it achieves key mileposts on revenue, customers, product innovation, and industry recognition
AlphaSense, the leading AI-powered market intelligence and search platform, today announced it has exceeded $400 million in annual recurring revenue (ARR), more than doubling its ARR since announcing it reached $200 million in April 2024. This success is driven by its growing customer base that has reached more than 6,000 customers—including 88% of the S&P 100—all of whom rely on AlphaSense for its cutting-edge AI technology to gain deeper insights, make data-driven decisions, and stay ahead of market trends. With its innovative platform, AlphaSense continues to empower businesses across industries—including leading financial institutions—to accelerate growth, streamline operations, and enhance strategic decision-making.
"We are at a tipping point where AI-driven insights are no longer a luxury but a necessity—every company's market value is the sum of the decisions it makes," said Jack Kokko, CEO and Founder of AlphaSense. "Surpassing $400 million in ARR and our rapid growth are clear signals that businesses are recognizing the transformative power of our end-to-end market intelligence platform. As we scale, our focus remains on product and technology innovation, ensuring we deliver high-value solutions and cutting-edge AI and smart workflow capabilities to our customers."
Market Momentum & Strategic Expansion
AlphaSense's record-breaking growth follows a series of strategic moves, including its landmark $930 million acquisition of Tegus in June 2024, the market's leading expert interview library covering more than 35,000 public and private companies, with over 150,000 transcripts of investor-led expert interviews. This acquisition combined AlphaSense's market-leading search and Generative AI technology with the market's top private content library, giving clients access to a vast source of essential insights on the range of companies and industries that matter to their decision-making.
With a total of 450 million searchable documents—including all equity research, event transcripts, expert interviews, company filings, news, and trade journals—AlphaSense equips decision-makers with unparalleled market insights, all powered by advanced AI technology. World-leading organizations, including Adobe, Amazon, American Express, Cisco, Nvidia, Microsoft, Pfizer, Salesforce, Nestlé, and JPMorgan, continue to turn to AlphaSense to redefine how businesses access and leverage market intelligence.
In addition to being trusted by customers, AlphaSense also celebrated a landmark year of achievements in 2024, including record-breaking industry and individual award wins from prestigious organizations such as CNBC, Fast Company, Fortune, Inc., and more.
"It's been a privilege to partner with Jack and the AlphaSense team over the past few years as they've redefined how organizations discover and act on critical insights," said James Luo, General Partner at CapitalG, Alphabet's independent growth fund. "Reaching $400M ARR is a testament to their bold vision and the transformative power of their AI-driven platform. We're thrilled to continue supporting AlphaSense as they lead the market intelligence revolution for businesses worldwide."
Innovation & Future Growth
Last week, AlphaSense announced its latest leap forward in transforming how businesses access and analyze critical insights with Generative AI. With the groundbreaking additions of Generative Search and Generative Grid, AlphaSense is not just advancing the industry—it's redefining the possibilities of AI in market intelligence, setting a new standard for speed, accuracy, and depth of analysis.
Looking ahead, AlphaSense is poised to further accelerate its momentum through continued investments in Generative AI, Enterprise Intelligence, deeper content integrations, and global market expansion. As industries navigate increasing complexity and information overload, AlphaSense is dedicated to delivering AI-powered solutions that empower professionals to make better decisions, faster.
Experience the power of AlphaSense's Generative AI by joining the early access program for Generative Grid. Plus, stay connected on the go with the AlphaSense mobile app. To learn more about career opportunities at the company, click here.
Mesh closes $82M Series B to accelerate building the first global crypto payments network
The round, led by Paradigm with participation from Consensys, QuantumLight, Yolo Investments, and others, was secured using PayPal USD (PYUSD) stablecoin, setting a historic precedent for stablecoin funding
Mesh, the leading crypto payments network enabling seamless transactions with cheap and immediate conversions, today announced it closed an $82 million Series B funding round, bringing its total amount raised to over $120 million. With payments and stablecoins widely seen as the biggest catalyst for crypto's mass adoption, the funds set the company up for sustained dominance in the industry's most promising sector. The round was led by Paradigm, with participation from Consensys (parent company of MetaMask), QuantumLight Capital (started by Revolut Founder & CEO Nik Storonsky), Yolo Investments, and others. Mesh has previously raised from investors including PayPal Ventures, Galaxy Ventures, and MoneyForward.
In a historic moment for both venture funding and stablecoins, most of the $82 million round was settled in PayPal USD (PYUSD) stablecoin. PYUSD was leveraged to close funding instantly, and Mesh's technology was used to transfer the assets securely. Stablecoins make VC funding instant, cheap, transparent, and available 24/7. The method of funding comes on the heels of PayPal Ventures' 2024 investment in Mesh, which was also completed largely in PYUSD.
Mesh has already partnered with major players such as MetaMask, Shift4, and Revolut, making its technology available to over 400 million users in over 100 countries worldwide. Now, the company can further accelerate product development and the expansion of its APIs to power hundreds of crypto and payments platforms.
"Stablecoins present the single biggest opportunity to disrupt the payments industry since the invention of credit and debit cards, and Mesh is now first in line to scale that vision across the world," said Bam Azizi, CEO and Co-Founder of Mesh. "With this funding, we're expanding the first truly global crypto payments network – one that allows users to pay with any crypto they hold while ensuring merchants can settle in the stablecoin of their choice, just like they do with fiat today."
Mesh's flagship payments solution is powered by its proprietary SmartFunding technology, which eliminates friction between users' assets and merchants' settlement requirements. That means an asset like Bitcoin, Ethereum, or Solana can be used as a means of payment, while merchants automatically receive the transaction amount in stablecoins such as PYUSD, USDT, or USDC, all without requiring the user to manually convert assets beforehand.
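The flow can be pictured as a single conversion step between the user's asset and the merchant's chosen stablecoin. The sketch below is purely illustrative: the function name, rate table, and pegging assumption are ours, and Mesh's proprietary SmartFunding technology is not shown here.

```python
# Hypothetical sketch of a pay-in-crypto, settle-in-stablecoin flow.
EXCHANGE_RATES_USD = {"BTC": 60_000.0, "ETH": 3_000.0, "SOL": 150.0}  # example rates

def settle_payment(asset: str, amount: float, settlement_stablecoin: str = "PYUSD"):
    """Convert a user's crypto payment into the merchant's chosen stablecoin.

    The user pays in any supported asset; the merchant receives the USD
    equivalent in a stablecoin, with no manual conversion by the user.
    """
    if asset not in EXCHANGE_RATES_USD:
        raise ValueError(f"unsupported asset: {asset}")
    usd_value = amount * EXCHANGE_RATES_USD[asset]
    # Assumption: the stablecoin is pegged 1:1 to USD, so the settlement
    # amount equals the USD value at the moment of conversion.
    return {"stablecoin": settlement_stablecoin, "amount": usd_value}

receipt = settle_payment("ETH", 0.5, "USDC")
assert receipt == {"stablecoin": "USDC", "amount": 1500.0}
```

A production system would of course source rates atomically at settlement time and handle slippage and fees, which this sketch deliberately omits.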
"We think crypto and stablecoins will be an enormous transformation to payments," said Charlie Noyes, General Partner at Paradigm. "Mesh makes paying with crypto as simple as using a credit card for users and merchants while preserving the benefits of transacting over blockchain rails."
Mesh is on track to become an integral part of global payments as the industry moves towards a stablecoin-dominated ecosystem, with stablecoins already representing over a $200 billion market cap and surpassing $27.6 trillion in transaction volume in 2024.
For more information about Mesh, visit https://meshconnect.com/.
About Mesh
Founded in 2020, Mesh is building the first global crypto payments network, connecting hundreds of exchanges, wallets, and financial services platforms to enable seamless digital asset payments and conversions. By unifying these platforms into a single network, Mesh is pioneering an open, connected, and secure ecosystem for digital finance. For more information, visit https://www.meshconnect.com/.
Contact: mesh@greenbrier.partners
AlphaSense and Cerebras join forces to power the future of AI market intelligence with 10x faster insights
AlphaSense, the leading AI-powered market intelligence and search platform, and Cerebras Systems, the pioneer in accelerating generative AI, today announced a strategic partnership to revolutionize how enterprises access, analyze, and act on critical market data. By integrating Cerebras' cutting-edge AI Inference capabilities, AlphaSense has significantly enhanced the speed and precision of its AI-driven research tools – delivering intelligent insights up to 10x faster and enabling sharper decision-making for business and financial professionals worldwide.
With the growing reliance on AI-driven research, speed and accuracy have become imperative for enterprises seeking to stay ahead of market trends, investment opportunities, and competitive shifts. Through this partnership, AlphaSense has successfully integrated Cerebras Inference, leveraging Cerebras' WSE-3 (Wafer-Scale Engine) and Llama models to optimize multi-turn AI-driven financial analysis. As a result, AlphaSense users can perform sophisticated multi-turn queries and receive results in seconds, a fraction of previous response times.
Powering AlphaSense with Wafer Scale AI
At the core of this collaboration is Cerebras' Wafer-Scale Engine (WSE-3), one of the world's fastest AI processors, designed to run the latest AI models 10x faster than the latest GPUs. Cerebras Inference enables AlphaSense to reduce latency, increase throughput, and push the boundaries of AI-driven market intelligence. With this advanced infrastructure, AlphaSense supports next-generation deep research models, making financial and business insights more accessible, actionable, and real-time.
"AI is revolutionizing market intelligence, but speed to key insights is imperative to making in-depth and accurate research actionable," said Raj Neervannan, Chief Technology Officer and Co-Founder of AlphaSense. "By partnering with Cerebras, we are integrating cutting-edge AI infrastructure with our intuitive, trustable generative AI product and exhaustive, unique content sets, allowing us to deliver unprecedented speed and the most accurate and relevant insights available – helping our customers make smarter decisions with confidence."
Transforming AI for Enterprises, Financial Services, and Beyond
This partnership marks a major leap forward for financial institutions, Fortune 500 companies, and enterprises relying on AI-driven research and analytics. By harnessing Cerebras' AI acceleration, AlphaSense is setting a new standard in financial services, market intelligence, and enterprise decision-making.
"We are thrilled to collaborate with AlphaSense to deliver unprecedented AI acceleration for market intelligence," said Andrew Feldman, CEO and Co-Founder of Cerebras. "Through this partnership, we are redefining financial services analytics—enabling organizations to access real-time, high-precision insights at a speed never seen before."
As AI continues to reshape the way businesses operate, this partnership represents a major leap forward in the evolution of market intelligence. By combining AlphaSense's industry-leading AI search platform with Cerebras' revolutionary AI computing power, the two companies are setting a new standard for speed, precision, and scale in AI-driven insights.
Cerebras Inference runs exclusively in US-based data centers, offering best-in-class data privacy, zero data retention, and strict compliance with US laws. This ensures AlphaSense customers can confidently leverage advanced AI reasoning capabilities without compromising on security or privacy.
With this collaboration, AlphaSense customers gain a transformative edge – equipped with the most advanced AI technology to extract high-value insights from vast amounts of unstructured data, make faster and more informed decisions, and stay ahead in an increasingly competitive business landscape.
Apptronik Robotics and Jabil to scale production of Apollo humanoid robots and deploy in manufacturing operations
Jabil’s world-class manufacturing expertise, Apollo’s mass-manufacturable design, and the promise of robots building robots will enable the flywheel needed for the mass adoption of humanoid robots.
Apptronik, the AI-powered humanoid robotics company, and Jabil (NYSE: JBL), a global leader in engineering, manufacturing, and supply chain solutions, have announced a pilot and strategic collaboration to build Apollo humanoid robots and integrate them into specific Jabil manufacturing operations. This includes the production lines that will build Apollo humanoid robots, paving the way for Apollo to build Apollo.
As part of the pilot program, newly manufactured Apollo units will leverage Jabil’s factory environment for real-world validation testing. The robots will be used to complete an array of simple, repetitive intralogistics and manufacturing tasks, including inspection, sorting, kitting, lineside delivery, fixture placement, and sub-assembly before being deployed to Apptronik customer sites. As humanoids are introduced to Jabil’s manufacturing lines, they will augment and support the existing workforce: people who previously performed these repetitive tasks can now dedicate their time to more creative, thought-intensive projects that shape and improve the future of Jabil’s operations.
Jabil supports customers in the development of market-leading advanced robotics and warehouse automation solutions, pushing the boundaries of technology. By building Apollo robots and evaluating their performance within a best-in-class production environment, Apptronik and Jabil will gather valuable real-world use cases for automation in manufacturing and optimize Apollo’s AI models.
As the worldwide manufacturing partner for Apollo humanoid robots, Jabil can provide Apptronik the flexibility and agility to scale production around the world as needed. Jabil’s expertise in developing and manufacturing robots will allow Apptronik to unify its supply chain and gain access to Jabil’s advanced manufacturing capabilities around the globe. This collaboration will benefit Apptronik customers through world-class quality, scalability, inventory management, turn-key procurement, and rapid production while providing Jabil the opportunity to test new automation solutions in support of safer operations, greater efficiency, and accelerated time-to-market.

Prioritizing Humanoid Scalability
To fulfill customer demand for its humanoid robots at the price point necessary for mass adoption, Apptronik’s world-class design includes a heritage of unique actuators, or motors, that unlock affordability and simplify maintenance. Its latest generation of actuators requires significantly fewer components, less manufacturing time, and lower cost than previous generations. With a cost-effective, simplified bill of materials (BOM) and the ability to mass produce units at scale with Jabil, Apptronik aims to make general-purpose humanoids more affordable and to expand into new markets and roles, such as front-of-house retail, elder care, and eventually home use.
“Humanoid robots have the potential to revolutionize the way we live and work, but for that to become a reality, we need to be able to build them rapidly at scale, at the right price point, and in geographies where our customers are located,” said Jeff Cardenas, co-founder and CEO of Apptronik. “Our partnership with Jabil, along with our unique design for manufacturability and ability to have Apollo humanoid robots handling material movement and assembly tasks in the factory, are critical components needed to create a flywheel effect that could make humanoid robots ubiquitous.”
“We’ve been committed to advanced automation and robotics across our operations, so piloting Apollo is a logical next step for our division and for Jabil in the long term,” said Rafael Renno, Senior Vice President of Global Business Units at Jabil. “Not only will we get a first-hand look at the impact that general-purpose robots can have as we test Apollo in our operations, but as we begin producing Apollo units, we can play a role in defining the future of manufacturing. These new technologies and applications further enhance Jabil's best-in-class capabilities to solve complex challenges and manufacture at scale for our customers.”
About Apptronik:
Apptronik is a human-centered robotics company developing AI-powered humanoid robots. Our goal is to create human helpers to support humanity in every facet of life. Our robot, Apollo, is designed to collaborate thoughtfully with humans—initially in critical industries such as manufacturing and logistics, with future applications in healthcare, the home, and beyond. Apollo is the culmination of nearly a decade of development, drawing on Apptronik’s extensive work on 15 previous robots, including NASA’s Valkyrie robot. Apptronik started out of the Human Centered Robotics Lab at the University of Texas at Austin and has over 150 employees. Learn more at https://apptronik.com.
About Jabil:
At Jabil (NYSE: JBL), we are proud to be a trusted partner for the world's top brands, offering comprehensive engineering, manufacturing, and supply chain solutions. With over 50 years of experience across industries and a vast network of over 100 sites worldwide, Jabil combines global reach with local expertise to deliver both scalable and customized solutions. Our commitment extends beyond business success as we strive to build sustainable processes that minimize environmental impact and foster vibrant and diverse communities around the globe. Discover more at www.jabil.com.
Investor Contact
Adam Berry
Senior Vice President, Investor Relations and Communications
adam_berry@jabil.com
Media Contact
Timur Aydin
Senior Director, Enterprise Marketing and Communications
publicrelations@jabil.com
Liz Clinkenbeard
Head of Communications
press@apptronik.com
LATAM fintech leader Kapital embraces full transparency
We applaud Arjun Sethi, Chairman at Tribe Capital and Co-CEO at Kraken, and the team at Kapital for a bold move that puts them in a category of their own. Embracing transparency as a private startup and global fintech leader, and demonstrating that transparency equals trust when building a company, fosters a stronger culture and stronger relationships with employees and investors.
In a world where more and more companies stay private, transparency matters—especially for regulated businesses. When I shared Kraken Digital Asset Exchange metrics, it sparked a conversation among the companies in my portfolio at Tribe Capital. The question was: Why do this? My response was simple—because we should, and because it keeps us accountable to our shareholders and our customers. Today, I’m proud to see some of the companies I’ve co-founded doing the same.
Highlights:
✅ Profitable since day one.
✅ We offer deposits, lending, expense management, taxes, payroll, and more. As a regulated bank in Mexico, operating in Colombia and Peru, we’ve built a platform that enables real innovation in financial services.
✅ Our unique approach—combining software with lending—lets us provide the best financial products to small and medium-sized businesses, giving them the tools they need to scale.
📊 Why this matters:
In emerging markets, small and medium businesses (SMBs) are the heartbeat of the economy. We’ve built something that truly serves them, and I’m excited to share some key numbers:
📈 Just shy of $200M in revenue.
🏦 $500M in deposits in Mexico.
💼 100,000+ merchants on our platform across the region.
We’ll be sharing our metrics every quarter. Looking forward to continuing this journey.