AI-Powered Microdisplay Adapts to Users’ Eyesight

Modern augmented reality (AR) and virtual reality (VR) headsets achieve pixel counts in the tens of millions. Yet that doesn’t guarantee a crisp image. Headsets often leave users squinting to focus on a blurry mess or, in some cases, fighting a creeping sense of nausea.

Microdisplay manufacturer KOPIN, of Westborough, Mass., working in partnership with MIT’s Computer Science & Artificial Intelligence Laboratory, may have a solution: the NeuralDisplay. It combines eye tracking with machine learning to compensate for a user’s vision on the fly, without additional optics.

“We asked, how can we change the technology for the user, and not make the user try to change themselves, to use the technology? And the answer that came back was AI,” says Michael Murray, CEO of KOPIN.

A display that adjusts for your eyes

The first NeuralDisplay is a 1.5-inch square micro-OLED with a resolution of 3,840 x 3,840 and a maximum brightness of 10,000 candelas. These specifications place it in league with other leading micro-OLEDs, such as Sony’s 1.3-inch 4K micro-OLED. It also has an unusual quad-pixel arrangement that places red, blue, and green sub-pixels alongside a fourth pixel containing a pixel imager.

NeuralDisplay packs 3,840 x 3,840 resolution and an onboard AI accelerator. KOPIN

The pixel imager doesn’t function as a display element. It has a different task: to measure the light reflected by the user’s eyes. It’s similar in concept to a digital camera, but simpler in execution, as the imagers operate in monochrome and concentrate on measuring brightness. That’s enough to deduce details about a user’s eyes, including the direction of their gaze, their eye position in relation to the screen, and the dilation of their pupils.
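As a rough illustration (the names and fields below are hypothetical, not KOPIN’s actual data format), the kind of per-frame reading those imagers yield could be sketched like this:

```python
from dataclasses import dataclass

@dataclass
class EyeReading:
    """Hypothetical per-eye measurement derived from one monochrome pixel-imager frame."""
    gaze_x: float                 # horizontal gaze direction, normalized to [-1, 1]
    gaze_y: float                 # vertical gaze direction, normalized to [-1, 1]
    offset_from_screen_mm: float  # eye position relative to the display
    pupil_diameter_mm: float      # pupil dilation inferred from reflected brightness
```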

Those measurements feed into an AI model that learns to compensate for the quirks of each user’s vision by adjusting the brightness and contrast of the display. “Think of it like two knobs, brightness and contrast, that we can turn in real-time,” says Murray. The pixel imagers continue to take readings, which are fed back into the machine learning algorithm to continually adjust the image. “The eye tracking piece is to have a feedback loop in the system. Did these changes make any difference, and what’s the user experience like?”
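A minimal sketch of one pass through that loop, building on the hypothetical EyeReading record above (the function names and the toy linear correction are illustrative assumptions, not KOPIN’s model):

```python
import numpy as np

def apply_knobs(frame: np.ndarray, brightness: float, contrast: float) -> np.ndarray:
    """Turn the two 'knobs' Murray describes: scale contrast around mid-gray, shift brightness."""
    return np.clip((frame - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

def correction(reading: EyeReading) -> tuple[float, float]:
    """Stand-in for the learned per-user model; a toy linear rule, not KOPIN's network."""
    brightness = -0.05 * (reading.pupil_diameter_mm - 4.0)  # dim slightly for dilated pupils
    contrast = 1.0 + 0.1 * abs(reading.gaze_x)              # nudge contrast up off-axis
    return brightness, contrast

# One pass of the feedback loop: measure, adjust, display, then measure again so the
# next reading tells the model whether the adjustment actually helped.
reading = EyeReading(gaze_x=0.2, gaze_y=-0.1, offset_from_screen_mm=1.5, pupil_diameter_mm=5.0)
frame = np.random.rand(3840, 3840).astype(np.float32)       # placeholder 3,840 x 3,840 frame
brightness, contrast = correction(reading)
adjusted = apply_knobs(frame, brightness, contrast)
```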

The data isn’t sent to the cloud, or even to a connected device, but instead handled by an onboard AI accelerator integrated into the display. Keeping data local is necessary to process it with the speed human vision requires. Murray says the human brain can interpret what it’s seeing in as little as 500 microseconds, after which problems caused by a headset’s optics are noticeable. Placing the AI accelerator onboard keeps latency in check and ensures the AI model is reliably fed new data.
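To illustrate why the accelerator sits on the display itself, here is a hypothetical back-of-the-envelope comparison against the 500-microsecond window Murray cites; every timing figure below is invented for illustration:

```python
# Hypothetical latency budget: corrections should land inside the ~500-microsecond
# window Murray cites, before artifacts from the headset's optics become noticeable.
BUDGET_US = 500

# Invented, illustrative figures for where the time could go when the model runs on-display.
on_display_path_us = {
    "pixel-imager readout": 50,
    "on-display inference": 100,
    "brightness/contrast update": 20,
}

# The same work routed through a connected device adds a round trip over the link.
tethered_path_us = {**on_display_path_us, "link to host and back": 1000}

for name, path in [("on-display", on_display_path_us), ("tethered", tethered_path_us)]:
    total = sum(path.values())
    verdict = "within" if total <= BUDGET_US else "over"
    print(f"{name}: {total} us total, {verdict} the {BUDGET_US} us budget")
```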

Demystifying headset customization

The NeuralDisplay isn’t a panacea, its makers confess, and its ability to compensate for near- or farsightedness is uncertain. “We’re still testing that part,” says…

The post “AI-Powered Microdisplay Adapts to Users’ Eyesight” by Matthew S. Smith was published on 12/13/2023 by spectrum.ieee.org