EnCharge AI Promises Low-Power and Precision in AI

Naveen Verma’s lab at Princeton University is like a museum of all the ways engineers have tried to make AI ultra-efficient by using analog phenomena instead of digital computing. At one bench lies the most energy-efficient magnetic-memory-based neural-network computer ever made. At another you’ll find a resistive-memory-based chip that can compute the largest matrix of numbers of any analog AI system yet.

Neither has a commercial future, according to Verma. Less charitably, this part of his lab is a graveyard.

Analog AI has captured chip architects’ imagination for years. It combines two key concepts that should make machine learning massively less energy intensive. First, it limits the costly movement of bits between memory chips and processors. Second, instead of the 1s and 0s of logic, it uses the physics of the flow of current to efficiently do machine learning’s key computation.

As attractive as the idea has been, various analog AI schemes have not delivered in a way that could really take a bite out of AI’s stupefying energy appetite. Verma would know. He’s tried them all.

But when IEEE Spectrum visited a year ago, a chip at the back of Verma’s lab represented some hope for analog AI, and for the energy-efficient computing needed to make AI useful and ubiquitous. Instead of calculating with current, the chip sums up charge. It might seem like an inconsequential difference, but it could be the key to overcoming the noise that hinders every other analog AI scheme.
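
To make that concrete, here is a minimal numeric sketch of what charge-domain accumulation means; it is mine, not EnCharge’s actual circuit, and it assumes each multiply is encoded as the charge Q = C·V on a capacitor, with the result read out as the total charge on a shared node:

```python
# Illustrative sketch of charge-domain accumulation (an assumption,
# not EnCharge's actual circuit): each multiply is encoded as the
# charge Q = C * V on a capacitor, and the dot product is read out
# as the total charge collected on a shared output node.

inputs = [0.8, 0.3, 0.5]         # activations encoded as voltages (V)
weights = [2e-15, 1e-15, 3e-15]  # weights encoded as capacitances (F)

# Q_i = C_i * V_i per capacitor; summing the charges on one node
# performs the accumulate step in physics rather than logic.
total_charge = sum(c * v for c, v in zip(weights, inputs))

print(f"accumulated charge: {total_charge:.2e} coulombs")  # ~3.40e-15
```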

This week, Verma’s startup EnCharge AI unveiled the first chip based on this new architecture, the EN100. The startup claims the chip handles a variety of AI workloads with up to 20 times better performance per watt than competing chips. It’s designed into a single processor card that adds 200 trillion operations per second at 8.25 watts, aimed at conserving battery life in AI-capable laptops. On top of that, a 4-chip, 1,000-trillion-operations-per-second card is targeted for AI workstations.
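
As a quick sanity check on those numbers (my arithmetic, not a figure published by EnCharge), 200 trillion operations per second at 8.25 watts works out to roughly 24 trillion operations per second per watt:

```python
# Back-of-the-envelope efficiency from the quoted laptop-card specs.
ops_per_second = 200e12  # 200 trillion operations per second
power_watts = 8.25       # quoted power for the single-chip card

tops_per_watt = ops_per_second / power_watts / 1e12
print(f"{tops_per_watt:.1f} TOPS/W")  # ~24.2
```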

Current and Coincidence

In machine learning “it turns out, by dumb luck, the main operation we’re doing is matrix multiplies,” says Verma. That’s basically taking an array of numbers, multiplying it by another array, and adding up the result of all those multiplications. Early on, engineers noticed a coincidence: Two fundamental rules of electrical engineering can do exactly that operation. Ohm’s Law says that you get current by multiplying voltage and conductance. And Kirchhoff’s Current Law says that if you have a bunch of currents coming into a point from a bunch of wires, the sum of those currents is what leaves that point. So basically, each of a bunch of input voltages pushes current through a resistance (conductance is the inverse of resistance), multiplying the voltage value, and all those currents add up to produce a single value. Math, done.
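
For readers who think in code rather than circuits, here is a minimal sketch of that coincidence; the voltages, conductances, and array sizes are illustrative, not taken from any real chip:

```python
import numpy as np

# Inputs encoded as voltages driven onto the rows of a crossbar (V).
voltages = np.array([1.0, 0.5, 0.2])

# Weights encoded as conductances, G = 1/R (siemens). Each column of
# the crossbar holds one output neuron's weights.
conductances = np.array([
    [1e-3, 2e-3],
    [4e-3, 1e-3],
    [2e-3, 3e-3],
])

# Ohm's law: each cell passes current I = V * G (the multiply).
# Kirchhoff's current law: currents meeting on a shared column wire
# add up (the accumulate). Together: a matrix-vector multiply.
column_currents = voltages @ conductances

print(column_currents)  # [0.0034, 0.0031] amperes
```

Swap the conductance matrix for a trained weight matrix, and the crossbar computes a neural-network layer’s matrix-vector product in a single physical step.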

Sound good? Well, it gets better. Much of the data that makes up a neural network consists of the “weights,” the things…

The post “EnCharge AI Promises Low-Power and Precision in AI” by Samuel K. Moore was published on 06/02/2025 by spectrum.ieee.org