Supercharged AI: Kove’s Software Outsources Memory for Speed

Modern society is becoming increasingly data hungry, especially as the use of AI continues to grow exponentially. As a result, ensuring enough computer memory, along with the power to sustainably support that memory, has become a major concern.

Now, the software company Kove has figured out how to pool and dynamically outsource computer memory in a way that dramatically boosts memory efficiency. Kove’s system leverages external pooled memory to produce results even faster than can be achieved with local memory alone.

For one of Kove’s clients, the approach reduced the power consumption of servers by up to 54 percent. For another, it slashed the time needed to run a complex AI model, enabling a 60-day training run to be completed in just one day.

John Overton, the CEO of Kove, has been working on this software solution for 15 years. He emphasizes that meeting the high demand for memory is one of the most pressing concerns facing the computer industry. “You hear of people running out of memory all the time,” he says, noting that AI and machine learning algorithms require huge amounts of data.

Yet computers can only crunch the data as fast as their memory allows, and will crash mid-task without enough of it. Kove’s software-defined memory (SDM) solution aims to mitigate this problem by outsourcing memory needs to external servers.

How Software-Defined Memory Works

Overton notes that many computer scientists thought that outsourcing memory—at least with the same efficiency as processing the data locally—was impossible. Such a feat would defy the laws of physics.

The issue comes down to the fact that signals can travel no faster than the speed of light. Therefore, if an external server is 150 meters away from a mainframe computer, there would inevitably be a delay of roughly 500 nanoseconds for the signal to reach that server: about 3.3 nanoseconds of latency (delay) for each meter the data must travel. “People have presumed this problem is unsolvable,” says Overton.
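As a rough check on that arithmetic, here is a minimal back-of-the-envelope calculation (not Kove’s code; the function name is an illustrative assumption) of the minimum one-way propagation delay imposed by the speed of light:

```python
# Hypothetical back-of-the-envelope check (not Kove's code): the minimum
# one-way propagation delay imposed by the speed of light.
SPEED_OF_LIGHT_M_PER_S = 3.0e8  # approximate speed of light

def propagation_delay_ns(distance_m: float) -> float:
    """Minimum one-way signal delay in nanoseconds for a given distance."""
    return distance_m / SPEED_OF_LIGHT_M_PER_S * 1e9

print(propagation_delay_ns(1.0))    # ~3.3 ns per meter
print(propagation_delay_ns(150.0))  # ~500 ns for a server 150 meters away
```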

SDM is able to overcome this issue and use pooled memory at high speed because of the way it strategically divides up the data being processed. It keeps the data that is most efficiently processed locally with the CPU, while the rest resides in the external memory pool. This doesn’t transmit data faster than the speed of light, but it is more efficient than processing all the data locally using a CPU. In this way, SDM can actually process data faster than if all the data were kept locally.
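To make the idea concrete, here is an illustrative sketch of a naive “hottest data stays local” placement policy. Kove has not published its actual algorithm, and every name and number below is a hypothetical stand-in:

```python
# Illustrative sketch only: a naive "hottest data stays local" placement
# policy. Kove has not published its algorithm; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    size_mb: int
    access_rate: float  # estimated accesses per second ("heat")

def place_blocks(blocks: list[Block], local_capacity_mb: int):
    """Keep the most frequently accessed blocks in local memory and
    spill the remainder to the external memory pool."""
    local, pooled, used = [], [], 0
    # Consider the hottest blocks first so the CPU finds its working set
    # on the local motherboard.
    for block in sorted(blocks, key=lambda b: b.access_rate, reverse=True):
        if used + block.size_mb <= local_capacity_mb:
            local.append(block)
            used += block.size_mb
        else:
            pooled.append(block)
    return local, pooled

# Example: a 1 GB local budget and three data blocks of varying "heat".
blocks = [
    Block("activations", 600, 500.0),
    Block("embeddings", 800, 50.0),
    Block("checkpoints", 400, 0.1),
]
local, pooled = place_blocks(blocks, local_capacity_mb=1024)
print([b.name for b in local])   # ['activations', 'checkpoints']
print([b.name for b in pooled])  # ['embeddings']
```

A production system would also have to migrate blocks as access patterns change and hide remote latency behind the CPU’s ongoing work, but the division of labor is the same: keep the hot working set on the motherboard and push the rest to the pool.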

“We’re clever about making sure that the processor gets the memory that it needs from the local motherboard,” Overton explains. “And the results are spectacular.” As an example, he notes that one of his clients, Red Hat, experienced a 9 percent reduction in latency using SDM.

Energy Savings from Pooled Memory

Another key advantage of the SDM approach is a sharp reduction in energy needs. Typically, scientists need to run models on whatever server…

The post “Supercharged AI: Kove’s Software Outsources Memory for Speed” by Michelle Hampson was published on 05/06/2025 by spectrum.ieee.org