AI Networking: Cornelis’ CN500 Boosts Performance

In the good old days, networks were all about connecting a small number of local computers. But times have changed. In an AI-dominated world, the trick is coordinating the activity of tens of thousands of servers to train a large language model—without any delay in communication. Now there’s an architecture optimized to do just that. Cornelis Networks says its CN500 networking fabric maximizes AI performance, supporting deployments with up to 500,000 computers or processors—an order of magnitude higher than today—and no added latency.

The new technology brings a third major product to the networking world, alongside Ethernet and InfiniBand. It's designed to let AI and high-performance computing (HPC, or supercomputing) systems achieve faster and more predictable completion times with greater efficiency. For HPC, Cornelis claims its technology outperforms InfiniBand NDR, the version introduced in 2022, passing twice as many messages per second with 35 percent less latency. For AI applications, it delivers six-fold faster communication compared with Ethernet-based protocols.

Ethernet has long been synonymous with local area networking, or LAN. Software patches have allowed its communication protocols to stand the test of time. The invention of InfiniBand was an improvement, but it was still designed with the same goal: connecting a small number of local devices. “When these technologies were invented, they had nothing to do with parallel computing,” says Philip Murphy, co-founder, president, and chief operating officer at Pennsylvania-based Cornelis.

When data centers started to spring up, engineers needed a new networking solution. Because different systems used different software, they couldn’t share resources—so scaling the likes of Ethernet and InfiniBand to accommodate the busiest periods of operations proved challenging. “That sparked the whole cloud evolution,” says Murphy. Sharing a cloud-based CPU among different computers or even different organizations became the solution du jour.

But while data center pioneers tried to maximize the number of applications running on one server, Murphy and his colleagues saw value in an opposite approach: maximizing the number of processors working on one application. “That requires a totally different networking solution,” he says, which is what Cornelis now offers. The company’s Omni-Path architecture, developed by Intel for supercomputing applications like simulating climate models or molecular interactions for drug design, offers maximum throughput with zero data packet loss.
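
To picture what “many processors, one application” looks like in practice, here is a minimal data-parallel sketch. It assumes mpi4py and NumPy, which the article does not mention; a fabric like Omni-Path sits beneath such libraries rather than replacing them. Every process computes a partial result for the same job, and an allreduce collective combines them, exactly the kind of traffic a training fabric must carry without loss.

# Illustrative data-parallel sketch (assumes mpi4py and NumPy; not Cornelis code).
# Every MPI rank works on its own shard of the same application, then the ranks
# combine their partial results with an allreduce collective over the fabric.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Stand-in for a locally computed partial result, such as a gradient shard.
local = np.full(4, float(rank), dtype=np.float64)

# Sum contributions from every rank; at large scale, the cost of collectives
# like this is dominated by the interconnect's message rate and latency.
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks combined their partial results: {total}")

Launched with, say, mpirun -n 4 python allreduce_sketch.py (a hypothetical filename), the same program runs on a handful of ranks or on thousands; the networking fabric is what determines how well that scaling holds up.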

Congestion-free data highway

Coordinating processors to train AI models requires the exchange of many messages (data packets) at very high bandwidth. The rate at which messages can be passed matters, and so does the latency, meaning how long it takes for a recipient to receive a message and respond.
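
One way to see why both numbers matter is a back-of-the-envelope cost model in which each message pays a fixed latency plus its size divided by the link bandwidth. The figures below are assumed placeholders for illustration, not Cornelis or InfiniBand measurements.

# Back-of-the-envelope cost model: time per message = latency + bytes / bandwidth.
# All numbers are assumed for illustration; they are not vendor benchmarks.

def transfer_time_s(message_bytes: float, latency_s: float, bandwidth_bytes_per_s: float) -> float:
    """Time to deliver one message over the fabric."""
    return latency_s + message_bytes / bandwidth_bytes_per_s

latency = 1e-6       # assume 1 microsecond of per-message latency
bandwidth = 50e9     # assume a 400-gigabit link, i.e. 50 gigabytes per second

# The same total payload, sent as 1,000 small messages versus one large transfer.
many_small = 1_000 * transfer_time_s(4_096, latency, bandwidth)
one_large = transfer_time_s(1_000 * 4_096, latency, bandwidth)

print(f"1,000 small messages: {many_small * 1e6:.0f} microseconds")
print(f"one large message:    {one_large * 1e6:.0f} microseconds")

With these assumed numbers, the chopped-up transfer takes roughly 13 times as long as the single large one, which is why a fabric's per-message overhead, and not just its raw bandwidth, governs how quickly tens of thousands of processors can stay in sync.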

One major challenge with sharing so many data packets throughout a network is traffic congestion. As Murphy explains, you need a way to…

The post “AI Networking: Cornelis’ CN500 Boosts Performance” by Rachel Berkowitz was published on 06/22/2025 by spectrum.ieee.org