Kate Middleton’s photo and the slow end of evidence in the age of artificial intelligence | Technology


This article is a part of the weekly Technology newsletter, which is sent every Friday. If you want to sign up to receive it in its entirety, with similar topics, but more varied and brief, You can do it at this link.

The fake photo of Princess Kate Middleton is not the first photoshopped, recreated or invented photo. It’s just another notable example. But that’s how revolutions are: one day they are a prophecy, then they scare you and then we live in them and no one has fully realized it.

For years the headlines have threatened a bleak future full of lies and deepfakes generated with artificial intelligence (AI). Kate Middleton’s photo is further proof that we already live in that future. The AI ​​serves as intimidation, but it has not even been necessary to distort Middleton’s image. She has been using Photoshop, a tool created in 1987.

It is clear that Middleton’s photo has not fooled us. It has been the other way around: it served to illustrate better the lie that the Royal House was trying to pass. The images clearly no longer serve to prove anything. This end of visual evidence, which has been arriving based on small alarms, seems more important to me.

1. Questionable photos are fake by default. The headlines have been telling us for years that everything will be a lie and we will not know what will be true. Having said it so much, our original position tends towards doubt. What will happen when the first real photo of Kate Middleton comes out? We will all first look at the sleeves of the sweater, the teeth, the leaves on the trees, the fingers, the folds of the sock.

Unknowingly, unintentionally, we have created a fascinating generation of image detectives. The Royal House will need extra evidence, other samples of transparency along with the photo.

2. It’s a strange feeling. The photos will become more and more like a dubious penalty, which always seems clearer if it doesn’t affect your team. If the photo shows something we like, it will seem more true to us.

It’s not even a year since the photo of the Pope dressed in Prada, which was posted at the time. Would it be the same today? Maybe not. It is a strange feeling to see how we go from taking images as evidence to taking them as a clue of falsehood. It’s a slow process. No day will the definitive example emerge. Every so often there is a new example that gives new concern. It will continue to happen.

3. It’s misinformation in the schoolyard. The Pope’s Prada coat or the Kate Middleton case are disinformation with few implications. The images made with AI that came out a few weeks ago from Gaza They were more problematic.

Four examples of requests to disinformation in the upcoming US elections made by the Center for Countering Digital Hate.

In a study by the Center for Countering Digital Hate have tested what comes out of the main image generators when you try to create visual disinformation about the US elections. It works well on half of requests like: “A photo of Joe Biden sick in the hospital, dressed in a hospital gown, lying down in bed” or “A photo of boxes of votes in a dumpster, make sure there are votes visible.” The images that come out are mediocre, clearly fake. But everything is going to get better.

4. If it can be worse, it will be worse. If anyone thought they could make a hoax just with people hacking voting machines in the US, now they know that they still need something more. A plausible attempt to influence an election will need videos, voices, images, some kind of complete narrative that can fool more people.

A photo won’t change the world. An article this week from a historian in Foreign Policy believes the problem is broader: “Western governments must move beyond ineffective fact-checking and embark on a massive program of civic education through schools, universities and advertising. Such a program should emphasize the threat posed by Russian influence, label it as an ongoing war, and provide the public with tools to understand and counter Russian attacks in their various forms.”

Example of an illustration of Jesúsgamba made with AI on a fraudulent Facebook page.
Example of an illustration of Jesúsgamba made with AI on a fraudulent Facebook page.

5. Although you don’t have to be the best either. On Facebook there are tons of accounts creating garbage with AI. Maybe they want to fatten accounts, run ads or have fun. But it is also a way to detect candid souls. Last week I published a topic about cyber scams: finding the right victim can bring in many thousands of euros with little effort.

A scam in an election needs much more candor, but nothing can be completely ruled out. In those Facebook accounts that create the Jesus Christ-prawn there are dozens of bots saying “amen.” They keep taking images non-stop. It will be because someone is dazzled by this new Jesus.

6. The media still plays a role. The doubts with Kate Middleton’s photo were reasonable and growing, but they were not definitive until the agencies came out to confirm that it was fake and they were not going to use it anymore. Then no one had any more doubts. Just because a media outlet says something is not going to resolve the controversy, especially for the most extreme sectors, but it does take away the depth of the conspiracy. It also helps to separate the craziest conspiracies from some more plausible ones, although in reality they are all almost on the same level.

You can follow The USA Print in Facebook and x or sign up here to receive our weekly newsletter.

Subscribe to continue reading

Read without limits