AI generators can create weird, wacky, or wonderful images—all while emitting hefty amounts of carbon. Energy-hungry electronic computations drive the generative AI process, with the underlying diffusion models trained to produce novel images out of random noise.
Researchers at the University of California, Los Angeles, aim to reduce this carbon footprint by employing photons instead of electrons to power AI image generation. Their optical generative models pair digital processors with analog diffractive processors that compute with photons. The group described their technology on 27 August in the journal Nature.
Optical Generative Models Explained
Here’s how the process works. The first step is called knowledge distillation, in which a “teacher” diffusion model trains a “student” optical generative model to digitally process random noise. Next, the student model encodes random noise inputs into optical generative seeds, which are phase patterns representing the phase information of light—think of each seed as something like a slide for an overhead projector. These seeds are displayed on a spatial light modulator (SLM), which can control the phase of light passing through it. (The specific SLMs used by the researchers are liquid crystal devices.) When laser light shines through the first SLM, the seed’s phase pattern propagates to a second SLM, the diffractive processor, which decodes the pattern into a new image captured by an image sensor.
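The optical half of that pipeline can be simulated numerically. The sketch below is an illustrative toy, not the researchers’ actual system: the seed and decoder phase patterns are random rather than learned, and the wavelength, pixel pitch, and propagation distances are assumed values chosen only to make the simulation run. It shows the structure of one pass, though: a phase-only seed on the first SLM, free-space diffraction, a phase-only decoding SLM, more diffraction, and an intensity image at the sensor.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a complex optical field through free space by `distance`,
    using the standard angular-spectrum diffraction model."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are zeroed out.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(
        arg > 0,
        np.exp(2j * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0.0))),
        0,
    )
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
n = 128
# Random stand-ins for the learned patterns (in the real system, the seed
# comes from the digital encoder and the decoder phase is trained).
seed_phase = rng.uniform(0, 2 * np.pi, (n, n))     # "optical generative seed" on SLM 1
decoder_phase = rng.uniform(0, 2 * np.pi, (n, n))  # diffractive processor on SLM 2

wavelength, dx, z = 520e-9, 8e-6, 0.05  # assumed: green laser, 8 µm pixels, 5 cm gaps
field = np.exp(1j * seed_phase)                          # laser modulated by the seed
field = angular_spectrum_propagate(field, wavelength, dx, z)
field = field * np.exp(1j * decoder_phase)               # pass through the decoding SLM
field = angular_spectrum_propagate(field, wavelength, dx, z)
image = np.abs(field) ** 2                               # intensity at the image sensor
```

Because both SLMs only shift the phase of the light, the simulated pass conserves optical energy; all of the “computation” lives in how the two phase patterns redistribute that energy across the sensor.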
“There’s a digital encoder, which gives you the seed rapidly, and then the analog processor is the key that decodes that representation for the human eye to visualize,” says Aydogan Ozcan, a professor of electrical and computer engineering at UCLA. “The generation happens in the optical analog domain, with the seed coming from a digital network. All in all, it’s replicating or distilling the information generation capabilities of a diffusion model.”
Generation happens at the speed of light: “The system runs end-to-end in a single snapshot,” Ozcan says. By harnessing the physics of optics, these systems can run more swiftly and potentially consume less energy than diffusion models that iterate through thousands of steps.
The team devised two versions of their model: the aforementioned “snapshot” model, which generates an image in a single optical pass, and an iterative model that refines its output over successive passes. The iterative model creates images with higher quality and clearer backgrounds than its snapshot counterpart. Both models were able to produce monochrome and multicolor images—including representations of butterflies, fashion products, handwritten digits, and even Van Gogh-style art—that the researchers found closely resembled the output image quality of diffusion models.
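The snapshot-versus-iterative trade-off can be illustrated with a deliberately simple numerical analogy. Here `optical_pass` is a hypothetical stand-in for one full encode-diffract-capture cycle, modeled only as a step that pulls the current estimate a fixed fraction toward a clean target; the real optical passes are far richer than this, but the comparison shows why repeating the cycle sharpens the result.

```python
import numpy as np

def optical_pass(image, target, strength=0.3):
    # Toy stand-in for one encode→diffract→capture cycle: moves the
    # current estimate a fixed fraction of the way toward the target.
    return image + strength * (target - image)

rng = np.random.default_rng(1)
target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0                 # pretend "clean" image: a bright square
noise = rng.uniform(0.0, 1.0, (32, 32))  # random-noise starting point

snapshot = optical_pass(noise, target)   # single pass, like the snapshot model
iterative = noise
for _ in range(10):                      # successive refinement, like the iterative model
    iterative = optical_pass(iterative, target)

def mean_error(x):
    return np.abs(x - target).mean()
```

After ten passes the iterative estimate sits much closer to the target than the single-pass snapshot, mirroring the article’s observation that the iterative model yields higher-quality images at the cost of extra passes.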
Privacy Benefits of Optical Models
Optical generative models offer an added benefit of privacy, mimicking encryption capabilities. “If you look at the phase information of the digital encoder,…

The post “Optical AI Enables Greener, Faster Image Creation” by Rina Diane Caballar was published on 10/06/2025 by spectrum.ieee.org