No, the US has not released crocodiles to stop migration, nor has the Louvre caught fire: how to detect whether an image has been generated with AI | Technology


The United States has not released “8,000 crocodiles into the Rio Grande” to stop migrants from crossing, nor has the pyramid of the Louvre Museum caught fire. No giant octopus has been found on the shore of a beach either, nor Bigfoot, nor the Loch Ness Monster. All of this misinformation stems from images generated with artificial intelligence. While technology companies such as Google, Meta and OpenAI try to detect AI-created content and develop tamper-resistant watermarks, users face a difficult challenge: discerning whether the images circulating on social networks are real. Although it is sometimes possible with the naked eye, there are tools that can help in more complex cases.

Generating images with artificial intelligence is becoming easier. “Nowadays anyone, without any technical skills, can write a sentence on a platform like DALL-E, Firefly, Midjourney or another prompt-based model and create a hyper-realistic piece of digital content,” says Jeffrey McGregor, CEO of Truepic, one of the founding companies of the Coalition for Content Provenance and Authenticity (C2PA), which seeks to create a standard for verifying the authenticity and origin of digital content. Some of these services are free, and others don’t even require an account.

“AI can create incredibly realistic images of people or events that never happened,” says Neal Krawetz, founder of Hacker Factor Solutions and FotoForensics, a tool for checking whether an image may have been manipulated. In 2023, for example, an image of Pope Francis wearing a Balenciaga down coat went viral, as did others of former United States President Donald Trump fleeing from the police to avoid arrest. Krawetz points out that these kinds of images can be used to influence opinions, damage someone’s reputation, create misinformation and provide false context around real situations. In this way, “they can erode trust in otherwise reliable sources.”

AI tools can also be used to falsely depict people in sexually compromising positions, committing crimes, or in the company of criminals, as noted by V.S. Subrahmanian, professor of computer science at Northwestern University. Last year, dozens of minors in Extremadura reported that fake nude photos of them, created with AI, were circulating. “These images can be used to extort, blackmail and destroy the lives of leaders and ordinary citizens,” says Subrahmanian.

And not only that. AI-generated images can pose a serious threat to national security: “They can be used to divide the population of a country by pitting one ethnic, religious or racial group against another, which could lead to long-term unrest and political instability.” Josep Albors, director of research and awareness at the computer security company ESET in Spain, explains that many hoaxes rely on this type of image to generate controversy and provoke reactions. “In an election year in many countries, this can tip the balance towards one side or the other,” he says.

Tricks to detect images generated with AI

Experts like Albors advise being suspicious of everything in the online world. “We have to learn to live with the fact that this exists, that AI generates parallel realities, and simply keep that in mind when we receive content or see something on social networks,” says Tamoa Calzadilla, editor-in-chief of Factchequeado, an initiative of Maldita.es and Chequeado to combat misinformation in Spanish in the United States. Knowing that an image may have been generated by AI “is a great step towards not being deceived and not sharing misinformation.”

Some images of this type are easy to detect just by looking at details of the hands, eyes or faces, says Jonnathan Pulla, fact-checker at Factchequeado. That is the case with an AI-created image of President Joe Biden wearing a military uniform: “You can tell by the different skin tones on his face, the telephone cables that go nowhere, and the disproportionately sized forehead of one of the soldiers who appear in the image.”

He also gives as an example some manipulated images of the actor Tom Hanks posing in t-shirts with messages for or against Donald Trump’s re-election. “The fact that (Hanks) has the same pose and only the text on the shirt changes, that his skin is very smooth and that his nose is irregular indicates that they may have been created with digital tools, such as artificial intelligence,” Maldita.es fact-checkers explain about these images, which went viral in early 2024.

Many AI-generated images can be identified with the naked eye by a trained user, especially those produced by free tools, according to Albors: “If we look at the colors, we will often notice that they are not natural, that everything looks like plasticine, and even that some elements of these images merge with each other, such as hair with the face or different pieces of clothing with one another.” If the image is of a person, the expert suggests also checking whether there is anything abnormal about their extremities.

While first-generation image generators made “simple mistakes,” they have improved markedly over time. This is according to Subrahmanian, who notes that they used to frequently depict people with six fingers and unnatural shadows. They also rendered street and store signs incorrectly, making “absurd spelling errors.” “Today, technology has largely overcome these shortcomings,” he says.

Tools to identify fake images

The problem is that now “an image generated by AI is already practically indistinguishable from a real one for many people,” as Albors points out. There are tools that can help identify these types of images, such as AI or Not, Sensity, FotoForensics or Hive Moderation. OpenAI is also building its own tool to detect content created by its image generator, DALL-E, as it announced on May 7 in a statement.

These types of tools, according to Pulla, are useful as a complement to observation, “since sometimes they are not very precise or fail to detect some AI-generated images.” Factchequeado’s fact-checkers usually use Hive Moderation and FotoForensics. Both can be used for free and work in a similar way: the user uploads a photo and requests that it be examined. While Hive Moderation offers a percentage of how likely it is that the content was generated by AI, FotoForensics’ results are more difficult to interpret for someone without prior knowledge.
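For readers curious about what that upload-and-score workflow looks like when automated, the Python sketch below sends an image to a detection service and reads back a likelihood score. It is a minimal illustration: the endpoint, field names and response format are hypothetical placeholders, not the actual APIs of Hive Moderation or FotoForensics, which each service documents separately.

import requests

# Hypothetical endpoint and response fields, for illustration only;
# real detection services each define their own API.
API_URL = "https://example-detector.test/v1/analyze"
API_KEY = "YOUR_API_KEY"

def ai_likelihood(image_path: str) -> float:
    """Upload an image and return the service's claimed
    probability (0-100) that it was generated by AI."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 99.9}
    return response.json()["ai_generated_probability"]

print(ai_likelihood("suspect_photo.jpg"))

As the fact-checkers note, a score like this is a starting point for observation, not a verdict.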

When uploading the image of the crocodiles supposedly released by the US into the Rio Grande to stop migrants from crossing, the image of the Pope in the Balenciaga coat, or one of a satanic McDonald’s Happy Meal, Hive Moderation gives a 99.9% probability that they were generated by AI. However, with the manipulated photo of Kate Middleton, Princess of Wales, it puts the probability at 0%. In this case, Pulla turned to FotoForensics and InVID, which “can show certain altered details in an image that are not visible.”

The results of the Hive Moderation tool with the image of the crocodiles supposedly released by the United States. Factchequeado / Maldita.es

But why is it so difficult to know if an image has been generated with artificial intelligence? The main limitation of such tools, according to Subrahmanian, is that they lack context and prior knowledge. “Humans use their common sense all the time to separate real claims from false ones, but machine learning algorithms for deepfake image detection have made little progress in this regard,” he says. The expert believes that it will become increasingly unlikely to know with 100% certainty whether a photo is real or generated by AI.

Even if a detection tool were accurate 99% of the time in determining whether content was generated by AI, “that 1% gap at Internet scale is huge.” In just one year, AI generated 15 billion images, according to a report from Everypixel Journal, which highlights that “AI has already created as many images as photographers have taken in 150 years.” “When all it takes is one convincingly fabricated image to degrade trust, 150 million undetected images is a pretty disturbing number,” McGregor says.
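The arithmetic behind McGregor’s figure is simple enough to check; a quick Python sketch of the numbers cited above:

# Scale of a 1% error rate on one year of AI image output
images_per_year = 15_000_000_000   # Everypixel Journal estimate
detector_accuracy = 0.99           # hypothetical 99%-accurate detector
undetected = images_per_year * (1 - detector_accuracy)
print(f"{undetected:,.0f} images slip through")  # 150,000,000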

Aside from manipulated pixels, McGregor stresses that it is also nearly impossible to verify whether an image’s metadata (time, date and location) is accurate after it is created. The expert believes that “the provenance of digital content, which uses cryptography to mark images, will be the best way for users to identify in the future which images are original and have not been modified.” His company, Truepic, says it has launched the world’s first transparent deepfake carrying these marks, with information about its origin.

An image marked with information about its origin. Truepic
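The idea behind such provenance marks can be sketched with standard cryptographic primitives. The Python example below is a conceptual illustration only, assuming a simple sign-then-verify scheme; it is not the C2PA manifest format or Truepic’s implementation, which also record capture details and edit history. A publisher signs the image bytes with a private key, and anyone with the matching public key can later confirm the pixels have not changed since signing.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Conceptual sketch of provenance marking, not the real C2PA format.
signer_key = Ed25519PrivateKey.generate()
public_key = signer_key.public_key()

with open("original.jpg", "rb") as f:
    image_bytes = f.read()
signature = signer_key.sign(image_bytes)  # attached when the image is published

# Later, a viewer checks whether the image was modified after signing:
try:
    public_key.verify(signature, image_bytes)
    print("Provenance intact: pixels unchanged since signing")
except InvalidSignature:
    print("Image was altered after it was signed")

In a real provenance standard, the signature travels with the image inside a manifest, and the signer’s certificate chain tells the viewer who is making the claim.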

Until these systems are widely implemented, it is essential that users adopt a critical stance. A guide prepared by Factchequeado, with support from the Reynolds Journalism Institute, lists 17 tools to combat misinformation, several of them for verifying photos and videos. The key, according to Calzadilla, is to be aware that none of them is infallible or 100% reliable. Therefore, a single tool is not enough to detect whether an image has been generated with AI: “Verification is carried out using several techniques: observation, the use of tools, and the classic techniques of journalism. That is, contacting the original source, reviewing their social networks, and verifying whether the information attributed to them is true.”
