Delivering the computational power of the human brain to artificial intelligence. Leveraging decades of metamaterials research and 300 patents, Neurophos unlocks the speed and efficiency of optical compute in an in-memory processor, increasing the speed and energy efficiency of AI inference by more than 100x.

Replace 100 GPUs with a single processor. Today’s AI compute demand is skyrocketing, driven by the massive requirements for training and inference in large language models (LLMs). To meet this surging demand, the world requires a 100x performance increase in AI compute capabilities.
Harness the Speed of Light.

Neurophos has completely redesigned the core of photonic computers—optical modulators—reducing their size by a factor of 10,000 compared to today's designs. This breakthrough enables us to breathe life into a previously infeasible 3D photonic computing concept, paving the way for general AI inference hardware that can carry the workload of 100 leading GPUs while consuming only 1% of the power.

The Neurophos optical processing unit (OPU) combines the company’s ultra-dense, fast in-house optical modulators with an optics stack that lets chips sit on the same plane, avoiding interconnect bottlenecks. This design enables the OPU to perform 100x faster and 100x more efficiently than leading GPUs.

To make AI hardware 100x faster, you need a radical rethink of the fundamentals.

Neurophos is developing the world’s most advanced form of AI inference accelerator: photonic in-memory compute using CMOS-compatible optical metasurfaces. By leveraging the massive parallelism of light and optical modulators the company claims are 10,000x smaller than today’s designs, enabling billions per wafer through standard processes, Neurophos redefines compute density and energy efficiency.
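To make the idea concrete, below is a minimal, hypothetical sketch of analog optical matrix-vector multiplication, the core operation that photonic in-memory compute accelerates. It does not reflect Neurophos’s actual design; the modulator model, array shapes, and noise level are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of analog optical matrix-vector multiplication,
# the core operation behind photonic in-memory compute. All values, shapes,
# and noise levels are illustrative assumptions, not Neurophos specifications.

rng = np.random.default_rng(0)

# Weights are stored "in memory" as modulator transmission levels (0..1),
# so the weight matrix never moves between memory and compute.
weights = rng.uniform(0.0, 1.0, size=(4, 8))

# The input activation vector is encoded as optical intensities (0..1).
activations = rng.uniform(0.0, 1.0, size=8)

def optical_mvm(transmissions, intensities, noise_std=0.01):
    """Model one optical matrix-vector multiply.

    Each input beam passes through a row of modulators; one photodetector
    per output sums the transmitted light, yielding one dot product.
    Gaussian noise stands in for shot/thermal noise in the analog readout.
    """
    ideal = transmissions @ intensities
    return ideal + rng.normal(0.0, noise_std, size=ideal.shape)

print("optical result:", optical_mvm(weights, activations))
print("digital result:", weights @ activations)
```

Because the weight matrix stays resident as modulator states, the multiplication happens where the data lives rather than shuttling weights between memory and compute, which is the sense in which the architecture is "in-memory."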

Neurophos has solved the fundamental bottleneck in optical computing: modulator density. The breakthrough delivers optical modulators 10,000x smaller than the current state of the art, packing 35 million modulators onto a single 850 mm² chip, where existing solutions would require a dining-table-sized area for the same count. This density breakthrough is critical because modulator count directly drives both computational throughput and energy efficiency.
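As a rough sanity check on those figures, here is a short calculation using only the numbers quoted above (35 million modulators, an 850 mm² die, and the claimed 10,000x size reduction):

```python
# Back-of-the-envelope check using only the figures quoted above;
# the rest is unit arithmetic.

chip_area_mm2 = 850          # die area quoted above
modulators = 35_000_000      # modulators per chip quoted above
shrink_factor = 10_000       # claimed per-modulator area reduction

density_per_mm2 = modulators / chip_area_mm2
print(f"{density_per_mm2:,.0f} modulators per mm^2")    # ~41,176

# The same 35M modulators at conventional (10,000x larger) size:
legacy_area_m2 = chip_area_mm2 * shrink_factor / 1e6    # mm^2 -> m^2
print(f"{legacy_area_m2:.1f} m^2 at conventional size") # 8.5 m^2
```

At conventional modulator sizes the same array would cover about 8.5 square meters, table-scale rather than chip-scale, consistent with the comparison above.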

Powered by optical metasurfaces and silicon photonics, Neurophos has developed an ultra-fast, wafer-scale AI inference architecture that breaks through the scaling limits of both electrical and legacy photonic compute. Built on decades of metamaterials research and supported by a portfolio of 300+ patents, the Neurophos team claims to have validated up to 235x performance and 190x energy efficiency improvements over Nvidia’s B200 GPUs, positioning the company to redefine the foundation of AI datacenter infrastructure.

AI compute at the speed of light
