The AI Arms Race to Combat Fake Images Is Even—For Now

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

The recent and dramatic increase in AI-generated images is blurring the lines of what’s real and fake—and underscores the need for more tools to discern between the two.

In a recent study, researchers in Italy analyzed a suite of AI models designed to identify fake images, finding these current methods to be fairly effective. But the results, published in the May-June issue of IEEE Security & Privacy, also point to an AI arms race to keep pace with evolving generative AI tools.

Luisa Verdoliva, a professor at the University of Naples Federico II in Italy and one of the study's authors, notes that while AI-generated images can be great entertainment, they can be harmful when used in more serious contexts.

“For example, a compromising image can be created for a political figure and used to discredit him or her in an election campaign,” Verdoliva explains. “In cases like this, it becomes essential to be able to determine whether the image was acquired by a camera or was generated by the computer.”

“The detectors get better and better, but the generators also get better, and both learn from their failures.” —Luisa Verdoliva, University of Naples Federico II

There are two types of clues that hint at whether an image was generated by AI. The first type is "high-level" artifacts, or defects, that are obvious to the human eye, such as odd shadows or asymmetries in a face. But as Verdoliva notes, these blatant errors will become rarer as image generators improve over time.

Deeper within the layers of an image are artifacts that aren't obvious to the human eye and can be found only through statistical analysis of the image's data. Each of these "low-level" artifacts is unique to the generator that created the image.
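To make the idea concrete, here is a minimal sketch in Python of one common way to expose such low-level traces: subtract a denoised copy of the image from the original and inspect the frequency spectrum of what remains. The filter choice and file name are illustrative assumptions, not the methods used in the study.

```python
# A minimal sketch of the "low-level artifact" idea: strip away image
# content with a denoising filter and inspect the leftover residual.
# The residual's frequency spectrum often carries generator-specific
# patterns. The median filter and file path below are assumptions for
# illustration, not the study's actual pipeline.
import numpy as np
from scipy.ndimage import median_filter
from PIL import Image

def noise_residual(path: str) -> np.ndarray:
    """Return the image minus a median-filtered copy of itself."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return img - median_filter(img, size=3)

def residual_spectrum(residual: np.ndarray) -> np.ndarray:
    """Magnitude of the centered 2-D Fourier transform of the residual."""
    return np.abs(np.fft.fftshift(np.fft.fft2(residual)))

residual = noise_residual("suspect_image.png")  # hypothetical file
spectrum = residual_spectrum(residual)
# Synthetic images often show periodic peaks in this spectrum that
# natural camera images lack.
```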

The concept is akin to firearm forensics, in which a fired bullet exhibits unique scratches from the barrel of the gun that shot it. In this way, bullets can be traced back to the gun that fired them.

Similarly, each fake image has a distinct data pattern based on the AI generator that created it. Ironically, the best way to pick up on these signatures is by creating new AI models trained to identify them and link them back to a specific image generator.
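In practice, that means framing attribution as a classification problem: given an image's residual, predict which generator (if any) produced it. The sketch below, assuming a hypothetical label set and a toy convolutional network in PyTorch, illustrates the shape of such a model; the detectors in the study are more sophisticated.

```python
# A hedged sketch of generator attribution: train a small classifier to
# map an image residual to the generator that produced it. The class
# list, architecture, and toy training step are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["real", "dall-e", "midjourney"]  # hypothetical label set

class AttributionNet(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 1, H, W) residuals
        return self.head(self.features(x).flatten(1))

model = AttributionNet()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for a
# real dataset of labeled residuals.
x = torch.randn(8, 1, 64, 64)          # batch of residuals
y = torch.randint(len(CLASSES), (8,))  # generator labels
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```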

In their study, Verdoliva and her colleagues tested 13 AI models—capable of detecting fake images and/or identifying their generator—against thousands of images known to be real or fake. Unsurprisingly, the models were generally very effective at identifying image defects and generators they were trained to find. For example, one model trained on a dataset of real and synthetic images was able to identify images created by the generator DALL-E with 87 percent accuracy, and images generated by Midjourney with 91 percent accuracy.
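Per-generator figures like these come from scoring a detector's predictions separately for each image source. A small sketch of that bookkeeping follows, using invented toy labels rather than the study's data.

```python
# A small sketch of per-source scoring behind figures like "87 percent
# on DALL-E": group labeled test images by origin and compute accuracy
# per group. All data below is invented for illustration.
import numpy as np

def per_source_accuracy(y_true, y_pred, sources):
    """Accuracy of real-vs-fake predictions, broken out by image source."""
    y_true, y_pred, sources = map(np.asarray, (y_true, y_pred, sources))
    return {
        src: float((y_pred[sources == src] == y_true[sources == src]).mean())
        for src in np.unique(sources)
    }

# Toy labels: 1 = fake, 0 = real (made-up results, not the study's).
y_true  = [1, 1, 1, 1, 0, 0]
y_pred  = [1, 1, 1, 0, 0, 1]
sources = ["dall-e", "dall-e", "midjourney", "midjourney", "camera", "camera"]
print(per_source_accuracy(y_true, y_pred, sources))
# {'camera': 0.5, 'dall-e': 1.0, 'midjourney': 0.5}
```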

More surprisingly, the detection models could still flag some AI-generated images that they weren’t specifically…


The post “The AI Arms Race to Combat Fake Images Is Even—For Now” by Michelle Hampson was published on 06/08/2024 by spectrum.ieee.org