This spring, hype has been swirling around two AI-powered gadgets: the Humane AI Pin and Rabbit R1. Both promised AI automation and seamless conversation with an ever-present, always-helpful AI assistant.
They failed. Prominent tech reviewer Marques Brownlee called the Humane AI Pin the “worst product I’ve ever reviewed,” while the Rabbit R1 received the somewhat kinder verdict of “barely reviewable.”
“I don’t think it will be more than six months before we see actual [AI] applications that are run on PCs, or even mobile devices.” —Dwith Chenna, AMD
Dr. John Pagonis, Principal UX Researcher at Zanshin Labs, observed that any new consumer device needs to prove it’s better than what’s already available, a test both AI gadgets failed. “What is the problem [these devices] solve? What is the need that they cover? That is not obvious.”
So, that’s a wrap, right?
Not quite. While Humane and Rabbit failed, solutions to the problems that stymied these newcomers are right around the corner—and they could change consumer tech forever.
ChatGPT, are you there? Hello? Hello…?
Today’s best AI large language models (LLMs) face a common foe: latency. People expect an instant reaction when they tap or talk, but the best LLMs reside in data centers, and every round-trip to the cloud adds delay. That’s core to Humane’s and Rabbit’s woes. Reviewers complained the gadgets were slow to respond and useless when Internet access was unreliable or unavailable.
Yet there’s a solution: Put the LLM on the device. I reported on this possibility for IEEE Spectrum in December, and a lot has happened since then. Meta’s Llama 3, Microsoft’s Phi 3, and Apple’s OpenELM—all announced in April of 2024—brought big gains in the quality of small AI models. Chipmakers like Apple, Intel, AMD, and Qualcomm are tackling the problem, too, boosting the performance of the AI coprocessors in laptops, tablets, and smartphones.
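For readers curious what on-device inference looks like in practice, here is a minimal sketch, assuming the open-source Hugging Face Transformers library and the publicly released Phi 3 mini checkpoint; the prompt is illustrative, and after the one-time model download, nothing in it touches the cloud.

```python
# A minimal sketch of running a small LLM entirely on-device, assuming
# `pip install transformers torch` and a one-time download of the
# microsoft/Phi-3-mini-4k-instruct checkpoint. No cloud round-trip afterward.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # Phi 3's small 3.8B variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain in one sentence why on-device AI reduces latency."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```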
“All these apps, all these different notifications. It’s too much. It’s exhausting. There’s a lot of research that this shouldn’t be the way we interact with technology.” —Patricia Reiners, UX designer
Dwith Chenna, an engineer at AMD who specializes in AI inference, said these improvements make it possible to run LLMs without the cloud. The Humane AI Pin and Rabbit R1 were simply released too early to take advantage of them.
Apple’s M4, which debuted in the new iPad Pro, has what the company calls “Apple’s fastest Neural Engine ever.” Apple
“There’s a lot of focus on trying to squeeze [large language] models, to run them on devices like PCs and mobile phones,” Chenna said. “I don’t think it will be more than six months before we see actual [AI] applications that are run on PCs, or even mobile devices.”
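In practice, “squeezing” a model usually means quantization: storing weights in 4 or 8 bits instead of 16 so the model fits in a phone’s or laptop’s memory. As a hedged illustration, here is what that looks like with the open-source llama-cpp-python bindings; the GGUF file name is a placeholder for whichever 4-bit-quantized small model you have downloaded.

```python
# Sketch: loading a 4-bit quantized small model with llama-cpp-python
# (pip install llama-cpp-python). Quantization cuts memory use roughly
# 4x versus 16-bit weights, which helps make laptop and phone inference viable.
from llama_cpp import Llama

# Placeholder path: any GGUF-format quantized checkpoint works here.
llm = Llama(model_path="phi-3-mini-4k-instruct-q4.gguf", n_ctx=4096)

result = llm("Q: Why do quantized models run better on laptops?\nA:",
             max_tokens=48, stop=["\n"])
print(result["choices"][0]["text"])
```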
Bringing LLMs to consumer tech will also tackle another key problem: privacy.
Rabbit’s R1 uses a Large Action Model (LAM) to automate apps and services, but some reviewers expressed discomfort with the idea. The LAM requires personal…