
An Astronomy Technique Could Detect Deepfakes Through People’s Eyes

Looking at the light reflections of eyes could be a clue to detecting AI-generated impersonations.
Photo by Marina Vitale / Unsplash

Astronomers are working on a new way to detect deepfakes by examining the reflections in people’s eyes. 

According to Nature, researchers at the University of Hull in the U.K. used the CAS system—a method used by astronomers that measures concentration, asymmetry and smoothness (or clumpiness) of galaxies—and a statistical measure of inequality called the Gini coefficient to analyze the light reflected in a person’s eyes in an image.

“It’s not a silver bullet, because we do have false positives and false negatives,” said Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull, who presented the research at the Royal Astronomical Society’s National Astronomy Meeting last week, according to Nature. Pimbblet said that in a real photograph of a person, the reflections in one eye should be “very similar, although not necessarily identical,” to the reflections in the other.
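That comparison idea can be sketched in code. The following is an illustration, not the researchers' implementation: it computes the Gini coefficient — the measure astronomers use to quantify how unevenly a galaxy's light is spread among its pixels — for a crop of each eye's reflection and flags a mismatch. The function names and the 0.1 tolerance are invented for the example; the actual study also used the CAS concentration, asymmetry, and smoothness measures.

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of pixel intensities, as used in galaxy
    morphology: 0 means light is spread perfectly evenly across
    pixels, values near 1 mean it is concentrated in a few pixels."""
    x = np.sort(np.abs(np.ravel(pixels)).astype(float))
    n = x.size
    if n < 2 or x.mean() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float(np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1)))

def reflections_consistent(left_eye, right_eye, tol=0.1):
    """Treat the face as plausibly real when the two eyes' reflection
    statistics roughly agree. The tolerance is a made-up value for
    illustration, not a threshold from the study."""
    return abs(gini(left_eye) - gini(right_eye)) <= tol
```

In a real photo the two eyes see nearly the same light sources, so their reflection statistics should roughly match; generative models have no such physical constraint, which is what makes the mismatch a usable signal.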

Adejumoke Owolabi, a data scientist at the University of Hull who applied the techniques to images of people’s faces, found that analyzing the light reflections in the eyes with those astronomy methods identified fakes about 70 percent of the time. Owolabi and Pimbblet’s findings aren’t yet published.

Scientists have been working on detecting deepfakes at scale using technology for almost as long as deepfakes have existed. Proposed detection methods include using AI to detect weird facial movements, looking for oddly blinking eyes, and checking for a pulse in the subject’s face. Companies including OpenAI and Meta have poured millions of dollars into initiatives to detect deepfakes, and the Department of Defense and the FBI have both freaked out about deceptive deepfakes and their potential for harm.

The problem with almost any AI detection method, especially as more realistic deepfakes flood the internet, is that it can also help AI models improve. Looking for the light in a subject’s eyes might be a useful detection method now, but by the time it’s useful at scale and fast enough to outpace people’s willingness to believe what they see online, it could already be obsolete.
