Aerospace innovator Natilus expands globally into the fastest-growing aviation markets with SpiceJet India
Natilus and SpiceJet have partnered, with SpiceJet ordering 100 of Natilus's innovative blended-wing aircraft, aiming for greater fuel efficiency, lower costs, and reduced emissions in India's growing market. Natilus has established an Indian subsidiary (Natilus India) in Mumbai to support operations, while SpiceJet assists with local certification for these futuristic, sustainable planes, potentially transforming domestic air travel.
- The company's new subsidiary in Mumbai - Natilus India - will serve Indian airlines' need for new planes amid the country's rapid economic growth
- SpiceJet places purchase order for 100 of Natilus's HORIZON passenger aircraft to reduce the airline's emissions and operational costs
- The move reflects India's appetite for new aircraft that supplement commercial fleets and open the door to new domestic and international routes
Natilus, a U.S. aerospace manufacturer of blended-wing body aircraft, today debuted Natilus India, a subsidiary headquartered in Mumbai and led by Ravi Bhatia, Regional Director, to support in-country operations. Simultaneously, Natilus announced its first commercial partnership with one of India's largest passenger airlines, SpiceJet Ltd., which plans to purchase 100 of Natilus's flagship HORIZON passenger aircraft once the plane is certified in India. SpiceJet sees HORIZON as an opportunity to advance its sustainability efforts and drive forward aviation innovation in India, and it will be the first airline in India to add HORIZON to its fleet.
Natilus India will prioritize the expansion of Natilus's family of BWB aircraft into Indian markets and will coordinate closely with SpiceJet. This development also positions Natilus to begin exploratory sourcing of Indian-manufactured components.
"We see immense opportunity to deliver a superior and more-efficient airplane for Indian airlines, such as SpiceJet," said Ravi Bhatia, Regional Director of Natilus India. "The establishment of the subsidiary in India is the first step in Natilus establishing roots in India and ultimately securing more commercial airline customers who want to better serve the needs of their passengers."
Natilus is developing a family of blended-wing aircraft, including its flagship passenger plane HORIZON and cargo plane KONA. Through improvements in aerodynamics, Natilus's blended-wing aircraft offer 40% greater capacity and 50% lower operating costs while burning 30% less fuel. Natilus's HORIZON, which can transport up to 240 passengers in its high-capacity configuration, is a prime solution for Indian commercial airlines looking to grow their fleets and keep flights affordable. Deliveries of the HORIZON will begin in the 2030s, at which point it is poised to become the most cost- and fuel-efficient aircraft in existing commercial and cargo fleets.
"Today, India has one of the fastest-growing markets in the world and it has an appetite for new aircraft that can supplement its commercial carrier fleets while opening the door to new domestic and international routes," said Aleksey Matyushev, CEO and Co-Founder of Natilus. "Our HORIZON serves as an ideal solution and represents an innovative path forward for the industry, for India and for the world."
Fintech leader Octane raises $100M in Series F funding round to fuel record growth
Equity Financing to Drive Continued Growth, Innovation, and Market Expansion.
Octane®, the fintech revolutionizing the buying experience for the powersports industry, announced it has closed its Series F funding round of $100 million in equity capital. The raise includes new equity capital to be used for growth initiatives as well as amounts to be used for secondary share transfers.
The capital builds on Octane’s strong originations growth and enables the Company to further accelerate market penetration and deepen its product offering, positioning the Company even more favorably for long-term success. The Series F raise attracted a mix of returning and new investors; Valar Ventures led the round with participation from Upper90, Huntington Bank, Camping World and Good Sam, Holler-Classic, and others. Prior to the Series F, Octane had raised $242 million in total equity funding since inception, including its Series E, which closed in 2024.
“Building on our strong foundation, this capital allows us to move more quickly on key initiatives that will further differentiate us in existing markets and speed up our entrance into new ones,” said Jason Guss, CEO and Co-Founder of Octane. “We’re grateful to our existing investors for their continued support and belief in our vision, as well as to new investors for their partnership. We look forward to strengthening these relationships as we expand our offerings and unlock the full potential of financial products for merchants and consumers.”
“One of the investing lessons of the past two decades is that the best tech companies can compound for far longer than expected,” said James Fitzgerald, Founding Partner of Valar Ventures. “Octane’s unique offering supports dealers and OEMs with software and financing solutions unavailable elsewhere. We expect Octane to continue to take market share — both in its existing markets and in those it’s only begun to enter — for a very long time. We are excited to continue backing this team and to partner with them for another decade, or longer.”
“It’s been impressive to watch Octane’s execution in becoming a clear leader in the powersports market,” said Billy Libby, Managing Partner at Upper90. “Now the company is scaling its proprietary underwriting engine and end-to-end technology platform as it expands into new markets and helps dealers grow their profits and deliver better financing experiences to consumers. Few public or private companies are growing as rapidly — and profitably — as Octane, and we’re excited to be part of their continued growth.”
Thus far in 2025, Octane has launched a host of new products and technology enhancements, including significant updates for both merchants and consumers. Notably, Octane strengthened its industry-leading financing portal to provide even faster, easier customer acquisition and closing processes for merchants, helping them reach more buyers and increase profitability. At the same time, customers can access simplified payment options, expedited question resolution, and increased flexibility within the Customer Portal.
Since its founding in 2014, Octane has originated over $7 billion in loans through its in-house lender Roadrunner Financial®, Inc., issued more than $4.7 billion in asset-backed securities, and has sold or committed to sell $3.3 billion of secured consumer loans since December 2023. The Company grew originations by more than 30% from Q3 2024 to Q3 2025 and is GAAP net income profitable. Octane works with 60 original equipment manufacturer (OEM) partner brands and serves markets worth a combined $150 billion with its innovative technology solutions and fast, easy financing experience.
Media Contact:
Senior Vice President, Communications and People at Octane
Press@octane.co
Investor Relations:
IR@octane.co
Forbes and Lightmatter CEO talk light-based photonic compute and the future of AI infrastructure
Meet The Founder Betting On Light As The Future Of AI Chip Technology and Infrastructure.
Nicholas Harris, CEO of Lightmatter, sat down with Forbes to discuss building AI chip technology relying on light rather than electrical signals. Harris also discussed why the end of Moore's Law and the AI boom created the perfect moment for photonics, enabling massive acceleration in AI model training and increased data center efficiency.

- Addressing Bottlenecks: As AI chip processing power increases, the connections (interconnects) between chips become the primary limitation, causing GPUs to sit idle while waiting for data. Lightmatter's photonic solutions significantly increase bandwidth and reduce latency and power consumption compared to conventional copper interconnects.
- Photonic Interconnects: The company's core product is the Passage platform, a 3D-stacked silicon photonics engine that facilitates high-speed data transfer between processors. This technology uses light (photons) to move data, offering up to 100 times faster data movement between chips in some cases.
- Products & Integration:
- Passage M1000: An active photonic interposer (a layer that AI chips sit atop) with record-breaking optical bandwidth for large AI model training.
- Passage L200: A 3D co-packaged optics (CPO) chiplet that integrates directly with XPU (accelerator/processor) and switch designs for improved performance and energy efficiency.
- Envise: A complete photonic AI accelerator platform designed to run AI computations with greater power efficiency than traditional electronic systems.
- Scalability & Partnerships: Lightmatter's technology enables large-scale AI clusters with up to millions of nodes. The company partners with major semiconductor and packaging firms like GlobalFoundries, Amkor, and ASE to ensure high-volume manufacturing and industry standardization. They are also part of the UALink Consortium, which aims to standardize AI interconnect solutions.
- Higher Speed and Bandwidth: Light-based interconnects can transfer data at much faster speeds and higher densities than electrical signals, which is crucial for the massive datasets used in AI.
- Energy Efficiency: Photonic interconnects consume significantly less power, helping data centers manage their growing energy consumption and operating costs.
- Performance Scaling: By eliminating interconnect bottlenecks, Lightmatter enables faster training times for advanced AI models, allowing for the development of larger and more capable neural networks.
AlphaSense AI makes Inc.'s 2025 Best in Business list in a major year for AI innovation and enterprise adoption
Company recognized for leadership in "Best AI Implementation" and "Best in Innovation" categories, validating the impact of its enterprise AI workflow enhancements
AlphaSense, the AI platform redefining market intelligence for the business and financial world, today announced it has been named to Inc.'s 2025 Best in Business list in the "Best AI Implementation" and "Best in Innovation" categories. This recognition solidifies a record-breaking year of growth for the company, which recently surpassed $500 million in Annual Recurring Revenue (ARR), driven by accelerated adoption of AlphaSense's AI workflow capabilities now used by over 6,500 enterprises.
AlphaSense is pioneering how businesses in every industry integrate generative AI into critical workflows such as go-to-market planning, investment banking, M&A, investor relations, and competitive intelligence. The company's AI innovation is underscored by the rapid adoption of its Generative Search capabilities, which enable users to instantly find insights across 500 million documents with natural language queries. Generative Search understands industry-specific terminology, anticipates relevant questions, and delivers granularly cited, analyst-level insights in seconds.
These AI capabilities have since expanded to include sophisticated AI workflow agents that act like a team of trusted, domain-specific analysts to guide decision making.
The Inc. Best in Business award celebrates companies that have demonstrated the most impactful and tangible business wins of the year. This accolade is the latest in AlphaSense's notable industry recognitions already this year, including being named to the Forbes 2025 Cloud 100 and listed as No. 8 in the CNBC Disruptor 50.
"This recognition by Inc. and the AI advancements we've made this year validate the work we are doing to democratize access to elite market intelligence for our customers, with trustworthy AI at the helm," said Kiva Kolstein, President and Chief Revenue Officer of AlphaSense. "The companies winning in this new era aren't just experimenting with AI, they're using AI to earn trust and grow revenue. Our mission has never been more critical. We enable customers to shift from complexity to clarity, deliver intelligence that drives real impact, and empower users to move from insight to action before the competition."
AlphaSense's AI platform enhancements over the past year include Generative Search, Generative Grid, Deep Research, Financial Data and AI Agent Interviewer, demonstrating the company's evolution from AI search to fully automated, end-to-end AI workflows and addressing the growing need among enterprises for domain-specific AI that is trustworthy and scalable.
AlphaSense is accelerating its vision to deliver trusted intelligence at unprecedented speed. Upcoming enhancements will be focused on unlocking deeper insights across quantitative and qualitative data, automating complex workflows with customized agents, and generating decision-ready deliverables—from pitchbooks to newsletters—on-demand.
About AlphaSense
AlphaSense is the AI platform redefining market intelligence and workflow orchestration, trusted by thousands of leading organizations to drive faster, more confident decisions in business and finance. The platform combines domain-specific AI with a vast content universe of over 500 million premium business documents — including equity research, earnings calls, expert interviews, filings, news, and internal proprietary content. Purpose-built for speed, accuracy, and enterprise-grade security, AlphaSense helps teams extract critical insights, uncover market-moving trends, and automate complex workflows with high-quality outputs. With AI solutions like Generative Search, Generative Grid, and Deep Research, AlphaSense delivers the clarity and depth professionals need to navigate complexity and obtain accurate, real-time information quickly. For more information, visit www.alpha-sense.com.
Media Contact
Remi Duhe for AlphaSense
Email: media@alpha-sense.com
The AI tech IPO window is reopening, with Anthropic and OpenAI preparing to IPO in 2026
Anthropic has hired Wilson Sonsini to prepare for a potential 2026 IPO while simultaneously raising a major private round (Bloomberg, The Information).
OpenAI is taking similar steps: the company has reportedly begun internal preparation for a 2026 listing, including discussions with banks, legal advisors, and early examinations of financial disclosures (Reuters, FT, Bloomberg).
These are concrete moves from two of the most capital-intensive AI companies — not speculation.
Private Capital Has Hit Its Limits
- AI scale-up looks more like energy infrastructure than software.
- IDC and Gartner estimate AI-related data-center capex in the hundreds of billions for 2025.
- Bloomberg Intelligence projects multi-trillion-dollar cumulative AI infra investment through 2030.
OpenAI and Anthropic both face heavy compute and power requirements. Private rounds alone cannot fund frontier model development at this scale. This is the structural reason top AI firms are preparing public filings.
IPO Activity Is Already Back
The US IPO market — frozen in 2022–2023 — has materially reopened:
- IPO volume in Q1–Q3 2025 nearly matched the entire year of 2024 (Renaissance Capital).
- ServiceTitan surged more than 40% on its December debut (WSJ).
- Anthropic and OpenAI both advancing IPO groundwork for 2026 (Bloomberg, Reuters, FT).
This activity shows a functioning pipeline, not a hypothetical one.
Why AI Is Driving the Shift to Public Markets
Major tech companies continue massive infrastructure expansion:
- AI capex for 2025 is measured in the hundreds of billions (IDC, Gartner).
- OpenAI’s annualized revenue has grown sharply, but the company still operates with substantial losses due to compute costs (FT).
- Microsoft has paused or slowed select data-center builds, signaling early supply constraints (Bloomberg).
These pressures create a simple reality: Only public markets can supply capital at the required scale.
Risks the Market Is Closely Watching
Monetization: OpenAI, Anthropic, and others show strong revenue growth but remain capital-intensive and unprofitable (FT).
Infrastructure bottlenecks: GPU availability, power constraints, and paused hyperscaler projects (Bloomberg).
Macro sensitivity: IPO activity reacts immediately to liquidity conditions; any broad market shock could slow issuance.
These risks are real and visible in filings, analyst notes, and infrastructure reports.
Bottom Line
Between Anthropic’s IPO preparation and OpenAI’s internal listing work, the direction of travel is clear:
late-stage AI companies are returning to public markets because they have to.
With IPO activity rising, strong debut performance, and unprecedented infrastructure spending needs, the tech financing environment has already shifted.
The IPO window isn’t “about to open.”
It’s open.
Heather Planishek joins the Lambda superintelligence cloud team as Chief Financial Officer
A big next step toward an IPO and the public markets for Lambda and the AI industry. Today, Heather Planishek joins Lambda as Chief Financial Officer. Most recently, she served as Chief Operating and Financial Officer at Tines, the intelligent workflow platform, and has been Lambda's Audit Chair since July 2025. Heather brings deep company insight to our leadership team as we accelerate the deployment of AI factories to meet demand from hyperscalers, enterprises, and frontier labs building superintelligence.
Lambda, the Superintelligence Cloud, announced the appointment of Heather Planishek as Chief Financial Officer. Planishek brings deep experience scaling high-growth technology companies and will lead Lambda’s global finance organization as the company continues to expand its infrastructure footprint and operations.
Planishek most recently served as Chief Operating and Financial Officer at Tines, the intelligent workflow platform. She previously served as Chief Accounting Officer at Palantir Technologies Inc. (NASDAQ: PLTR), where she helped guide the company through a period of significant growth and its transition to the public markets. Earlier in her career, she held key leadership roles at Hewlett Packard Enterprise and Ernst & Young, building extensive expertise in enterprise-scale finance, operations, and governance. Planishek joined Lambda’s Board of Directors in July of this year, and as part of the transition, will step down from her board position.
“It’s awesome to have Heather join as our CFO,” said Stephen Balaban, co-founder and CEO of Lambda. “I’ve gotten to know her through our work together on the board, and it’s great to have her take on an executive leadership role at Lambda.”
In her new position, Planishek will oversee Lambda’s financial strategy, planning, accounting, treasury, investor relations, and business systems. Her appointment strengthens the company’s leadership team as Lambda continues to serve some of the world’s leading AI labs and largest hyperscalers, as well as tens of thousands of AI developers across research, startups, and enterprise.
“Lambda sits at the center of one of the most important shifts in technology, with infrastructure that I believe is rapidly becoming essential for AI at scale,” said Planishek. “I’m excited to join the leadership team and help strengthen the financial and operational foundation that will support Lambda’s continued growth and long-term market leadership.”
About Lambda
Lambda, The Superintelligence Cloud, is a leader in AI cloud infrastructure serving tens of thousands of customers.
Founded in 2012 by machine learning engineers who published at NeurIPS and ICCV, Lambda builds supercomputers for AI training and inference.
Our customers range from AI researchers to enterprises and hyperscalers.
Lambda’s mission is to make compute as ubiquitous as electricity and give everyone the power of superintelligence. One person, one GPU.
Contacts
Lambda Media Contact
pr@lambda.ai
Lambda revolutionizes light-based AI infrastructure with NVIDIA Quantum‑X Photonics for next-gen AI factories
Frontier AI training and inference now operate at unprecedented scale. The new co-packaged optics networking from NVIDIA delivers 3.5x higher efficiency and 10x resiliency for large-scale GPU clusters.
Lambda, the Superintelligence Cloud, today announced it is among the first AI infrastructure providers to integrate NVIDIA's silicon photonics–based networking.
The shift to co-packaged optics (CPO) addresses a critical bottleneck in AI infrastructure. As AI models now train on hundreds of thousands of GPUs, and beyond, the network connecting them has become as important as the GPUs themselves. Traditional networking approaches are not keeping pace with this scale.
Why CPO Matters for AI Compute Networks
NVIDIA Quantum-X Photonics InfiniBand and NVIDIA Spectrum-X Photonics Ethernet switches use co-packaged optics (CPO) with integrated silicon photonics to provide the most advanced networking solution for massive-scale AI infrastructure. CPO addresses the demands and constraints of GPU clusters across multiple vectors:
- Lower power consumption: Integrating the silicon photonic optical engine directly next to the switch ASIC eliminates the need for transceivers with active components that require additional power. At launch, NVIDIA cited a 3.5x improvement in power efficiency over traditional pluggable networks.
- Increased reliability and uptime: Fewer discrete optical transceiver modules, one of the highest failure rate components in a cluster, mean fewer potential failure points. NVIDIA cites 10x higher resilience and 5x longer AI application runtime without interruption over traditional pluggable networks.
- Lower latency communication: Placing optical conversion next to the switch ASIC minimizes electrical trace lengths. This simplified data and electro-optical conversion path provides lower latency than traditional pluggable networks.
- Faster deployment at scale: Fewer separate components, simplified optics cabling, and fewer service points mean that large-scale clusters can be deployed, provisioned, and serviced more quickly. NVIDIA cites 1.3x faster time to operation versus traditional pluggable networks.
“NVIDIA Quantum‑X Photonics is the foundation for high-performance, resilient AI networks. It delivers superior power efficiency, improved signal integrity, and enables AI applications to run seamlessly in the world’s largest datacenters,” said Ken Patchett, VP of Data Center Infrastructure at Lambda. “By integrating optical components directly next to the network switches, we believe our customers can deploy AI infrastructure faster while significantly reducing operational costs – essential as we continue to scale to support frontier AI workloads.”
NVIDIA reports that NVIDIA Photonics delivers 3.5x better power efficiency, 5x longer sustained application runtime, and 10x greater resiliency than traditional pluggable transceivers. Co-packaged optics can provide increased compute per watt and enhanced network reliability, enabling faster model training and inference.
Built for the era of real-time AI, NVIDIA Photonics networking features a simplified design, resulting in fewer components to install and maintain. Lambda continues to provide scalable AI infrastructure, helping enterprises, research labs, and startups build multi-site, large-scale GPU AI factories.
“AI factories are a fundamentally new class of infrastructure – defined by their network architecture and purpose-built to generate intelligence at massive scale,” said Gilad Shainer, senior vice president of networking at NVIDIA. “By integrating silicon photonics directly into switches, NVIDIA Quantum-X silicon photonics networking switches enable the kind of scalable network fabric that makes massive-GPU AI factories possible.”
How Lambda Plans to Leverage It
Lambda is preparing its next-generation GPU clusters to integrate CPO networking using NVIDIA Quantum-X Photonics InfiniBand and Spectrum-X Photonics Ethernet switches. These advances in silicon-photonics switching are critical as we design massive-scale training and inference systems. For Lambda’s NVIDIA GB300 NVL72 and NVIDIA Vera Rubin NVL144 clusters, we are adopting CPO-based networks to deliver higher reliability and performance for customers while simplifying large-scale deployment operations and improving power efficiency.

This builds on Lambda's collaboration with NVIDIA. Lambda recently achieved NVIDIA Exemplar Cloud status, validating its ability to deliver consistent performance for large-scale training workloads on NVIDIA Hopper GPUs. Over the past decade, Lambda has earned six NVIDIA awards, affirming its position as a trusted collaborator in NVIDIA’s ecosystem.
Contacts
Lambda Media Contact
pr@lambdal.com
Archetype AI raises $35M Series A to scale deployment of physical agents to solve real-world problems
Imagine Physical Agents that reduce downtime, improve safety, and increase operational efficiency across warehouses, construction sites, and city infrastructure.
The team's results validate what we believed from the start: organizations don't need more data — they need intelligence where it matters most, inside their operational environments.
The breakthrough here is Newton, their frontier Physical AI foundation model — massively multimodal, capable of fusing sensor data with natural language, and scaling cross-modal reasoning across increasingly complex environments, tasks, and problems.
With the Series A, they will continue to grow the team and expand partnerships to make Physical AI accessible to every organization that manages physical assets — and ultimately, to bring intelligence to every asset in the physical world.
Archetype AI enables businesses to build, customize, and deploy Physical Agents that turn real-world sensor data into actionable insights, recommendations, and automations
The leading Physical AI company Archetype AI closed a $35 million Series A funding led by IAG Capital Partners and Hitachi Ventures, with participation from new and existing investors including Bezos Expeditions, Venrock, Amazon Industrial Innovation Fund, Samsung Ventures, Systemiq Capital, E12 Ventures, Higher Life Ventures, and others. Archetype AI is also introducing new tools to build and deploy Physical Agents that sense, understand, and act in real-world environments.
“Archetype AI is refining and defining the full stack of Physical AI, creating scalable solutions that operate in the real world, not just on screens or in simulations,” said Dennis Sacha, Founding Partner at IAG Capital Partners. “This team is building a category-defining company that will transform how humans and agents interact with everything from edge devices to critical infrastructure, generating lasting value at scale.”
While AI agents are increasingly used to automate digital workflows online, extending those capabilities into the physical world has remained complex, costly, and resource-intensive. Traditional approaches — siloed industry-specific solutions or custom ML tools — demand significant engineering expertise and capital investments, and they only solve narrow business problems (for example, safety) without the ability to generalize across multiple real-world use cases.
With Archetype’s Physical Agents, businesses can turn raw sensor data into real-world intelligence in minutes using natural language prompts and APIs that integrate seamlessly with existing frameworks. Powered by Newton™️, a breakthrough Physical AI foundation model, the agents fuse multimodal sensor data, video, and contextual information to generate insights, recommendations, and automations. The Archetype platform offers pre-built services like Agent Toolkit, enabling rapid assembly, testing, and deployment of Physical Agents. These agents can run anywhere — in a private cloud, on-premise, or at the edge — ensuring complete data sovereignty and enterprise-grade security, a critical requirement for physical industries.
“Physical Agents allow businesses to move from intent to action with speed and efficiency that were not previously possible,” said Ivan Poupyrev, Co-founder and CEO of Archetype AI. “Newton provides general physical intelligence, while the Archetype platform and Agent Toolkit make it simple to build and deploy customer-specific solutions that solve their critical problems by using specific knowledge about their physical operations — here and now.”
Agent Toolkit enables businesses to build custom Physical Agents that serve diverse use cases. To accelerate development, Archetype provides pre-built, ready-to-use agents, including:
- Process Monitoring Agent — Track ongoing machine operations, detect and discover anomalies, and identify machine states.
- Task Verification Agent — Verify worker adherence and compliance to planned workflows and procedures in services, training, and operations control.
- Safety Agent — Monitor environments for potential hazards and unsafe behaviors that can be defined by natural language to ensure safety compliance.
With these agents, customers can quickly deploy solutions tailored to their operations. The Archetype platform supports creating new agents, modifying existing ones, and enabling the model to adapt agents to new environments and requirements.
Early enterprise customers, including NTT DATA, Kajima, and the City of Bellevue, have already deployed Physical Agents to increase efficiency, reduce downtime, and improve safety in environments as varied as warehouses, construction sites, and city streets.
“Archetype AI’s Physical Agents improve operations and safety across real-world assets,” said a Samsung Ventures representative. “From reducing machine downtime in factories to monitoring construction sites in real time, their platform delivers tangible results enterprises can put into action immediately.”
The new funding will enable Archetype to accelerate scaling the Archetype platform, expand the Physical Agent capabilities, and invest further in frontier research and development to advance Newton's ability to interpret, reason, and act in the physical world. The company is releasing new research results demonstrating the state-of-the-art capabilities of Newton for physical signal-language fusion, which allows the model to generate continuous time series signals from language descriptions. With this research, Archetype is making the next step beyond understanding to acting and manipulating the physical world.
The Archetype AI Platform and Physical Agent Toolkit are available today in beta for select customers, with expanded capabilities rolling out in the coming months.
About Archetype AI
Archetype AI is the Physical AI company helping humanity make sense of the real world. Founded by Google veterans, it pioneers a new AI category that fuses real-time sensor data with natural language, enabling users to ask open-ended questions about what’s happening now and what could happen next. As the creators of Newton, a first-of-its-kind foundation model, Archetype AI designed this platform to interpret the physical world and enhance human decision-making. Archetype AI partners with Fortune Global 500 brands across industries — automotive, consumer electronics, logistics, and retail — turning real-world data into actionable insights. For more information, visit: archetypeai.io.
Contacts
Groq expands to Asia-Pacific with Equinix data center to power the next generation of AI inference
In collaboration with Equinix (NASDAQ: EQIX), Groq brings low-latency, high-efficiency compute closer to customers in Australia.
Groq, a global leader in AI inference, today announced its first AI infrastructure footprint in Asia-Pacific through a deployment in Equinix's data center in Sydney, Australia. The development is part of Groq's continued global data center network expansion, following launches in the U.S. and Europe, and brings fast, low-cost, and scalable AI inference closer to organizations and the public sector across Australia.
Under this partnership, Groq and Equinix will establish one of the largest high-speed AI inference infrastructure sites in the country with a 4.5MW Groq facility in Sydney, offering up to 5x faster and lower-cost compute than traditional GPUs and hyperscaler clouds. Leveraging Equinix Fabric®, a software-defined interconnection service, organizations in Asia-Pacific will benefit from secure, low-latency, high-speed interconnectivity, ensuring seamless access to GroqCloud™ for production AI workloads with full control, compliance, and data sovereignty.
Groq is already working with customers across Australia, including Canva, to deliver inference solutions tailored to their business needs, from enhancing customer experiences to improving employee productivity.
“The world doesn’t have enough compute for everyone to build AI. That’s why Groq and Equinix are expanding access, starting in Australia,” said Jonathan Ross, CEO and Founder of Groq.
Cyrus Adaggra, President, Asia-Pacific, Equinix, said: “Groq is a pioneer in AI inference, and we’re delighted they’re rapidly scaling their high-performance infrastructure globally through Equinix. Our unique ecosystems and wide global footprint continue to serve as a connectivity gateway to their customers and enable efficient enterprise AI workflows at scale.”
"We're entering a new era where technology has the potential to massively accelerate human creativity. With Australia's growing strength in AI and compute infrastructure, we're looking forward to continuing to empower more than 260 million people to bring their ideas to life in entirely new ways," said Cliff Obrecht, Co-Founder and COO of Canva.
About Groq
Groq is the inference infrastructure that powers AI with the speed and cost it requires. Founded in 2016, the company created the LPU and GroqCloud to ensure compute is faster and more affordable. Today, Groq is a key part of the American AI Stack and trusted by more than two million developers and many of the world’s leading Fortune 500 companies.
Groq Media Contact
About Equinix
Equinix (Nasdaq: EQIX) shortens the path to boundless connectivity anywhere in the world. Its digital infrastructure, data center footprint and interconnected ecosystems empower innovations that enhance our work, life and planet. Equinix connects economies, countries, organizations and communities, delivering seamless digital experiences and cutting-edge AI – quickly, efficiently and everywhere.
Equinix Media Contacts
Annie Ho (Asia-Pacific)
Graham White (Australia)
Groq expands partnership with HUMAIN to deploy Next-Generation Inference Infrastructure in Saudi Arabia
Groq, the U.S. leader in ultra-low-latency inference acceleration, and HUMAIN, the PIF company delivering global full-stack artificial intelligence solutions, announced a major expansion of their strategic partnership at the U.S.-Saudi Investment Forum held in Washington, D.C., alongside the visit of Saudi Arabia’s Crown Prince HRH Mohammed bin Salman Al-Saud.
The agreement builds on the region’s first and largest sovereign inference cluster, powered by Groq and already serving more than 150 countries. It continues the mission to establish HUMAIN as the number three global inference provider and affirms the joint commitment to keep this Groq-powered cluster the largest in the region as demand for real-time AI grows.
As part of this deployment, HUMAIN will partner with Groq to expand the advanced Groq-powered inference infrastructure already operating in the Kingdom to more than three times its current capacity, ensuring rapid time-to-market and underscoring HUMAIN’s position in ultra-low-latency, sovereign AI compute. This proven architecture serves as the backbone of the nation’s real-time AI capabilities.
In addition, HUMAIN will partner with Groq to introduce Groq’s latest next-generation chipset and rack architecture, bringing substantial improvements in compute density, power efficiency, on-die memory bandwidth, and RealScale™ interconnect performance. This next-generation technology enables the deployment of more sophisticated, larger-scale AI models and agentic workloads with deterministic speed and efficiency.
“Our work with HUMAIN continues to advance. By combining immediate deployment capabilities with the introduction of our next-generation architecture, we are enabling real-time AI at a scale that sets a new global benchmark,” said Jonathan Ross, Groq Founder and CEO.
Tareq Amin, CEO of HUMAIN, highlighted the strategic significance of the expansion: "By scaling our sovereign inference cluster and incorporating both our current high-performance infrastructure and Groq’s newest chipset innovations, together we are building one of the world’s most advanced real-time AI platforms. This expansion accelerates our mission to bring world-leading compute capabilities directly into the Kingdom’s data centers," said Amin.
The expanded cluster will power a new wave of national-scale AI applications across government, enterprise, healthcare, finance, industrial systems, multilingual AI, and real-time agentic workflows.