‘Not if it’s my daughter’: the ‘deepfake’ porn consumer views it without hesitation, but would report it if the victim were someone close to him | Technology

She was not yet Italy’s prime minister, but she was already well known when, four years ago, a fake pornographic video was published with Giorgia Meloni’s face on another woman’s body. On July 2 she has been called to testify in a lawsuit against those responsible: a 40-year-old man who created the images and his 73-year-old father, who provided the phone line used to publish them. She is demanding 100,000 euros from them as a “symbolic, exemplary measure” that “contributes to the protection of women targeted by this type of crime,” according to her lawyer, Maria Giulia Marongiu. Deepfakes, hyper-realistic fake audiovisual materials, have doubled every year since the first complaint of non-consensual nudity was registered in 2017, and little has changed since then. An investigation by Home Security Heroes confirms an already familiar picture: 98% of deepfakes are pornography, 99 out of every 100 victims are women, and almost all of them are well-known figures.

The most radical change has been technological. Where knowledge of computers and image editing was once required, one in every three available tools now allows users to create deepfakes in under 25 minutes and at no cost. Google, a useful indicator as the predominant search engine, has removed 8 billion links, according to its latest transparency report. Thousands of them are deepfake pages concentrated in two portals, according to the Lumen database at Harvard University. Technology companies, compelled by new laws, are beginning to act.

The accessibility of the tools (60% online and 40% downloadable) combines with the motivations of the abusers, who convince themselves that they act only out of curiosity, attraction to famous people, as in the case of the singer Taylor Swift, or the visualization of a fantasy, according to Home Security Heroes. This naive perception means that 74% of users (according to a survey of 1,522 male participants) feel no guilt.

But this supposed naivety is as false as the material they consume. “It is a problem of sexist violence,” Adam Dodge, founder of EndTAB, a non-profit organization for education in technology use, told the Massachusetts Institute of Technology (MIT). The EU directive on combating violence against women classifies these creations as aggression.

And the perception of this attack is so clear that, in a display of hypocrisy, the vast majority of deepfake consumers would report the material if the victim were someone close to them (73%) and would feel “shocked and outraged” (68%) by the violation of her privacy, according to the Home Security Heroes study.

The growth of non-consensual nudity has occurred despite laws that condemn these practices and protect victims against the supposed freedom of expression invoked by content creators. “According to article 18.1 of the Constitution, the rights to honor, to personal and family privacy and to one’s own image are considered fundamental (…) Article 20.4 provides that respect for such rights constitutes a limit on the exercise of the freedoms of expression.” This is how Organic Law 1/1982, which regulates this matter in Spain, puts it.

“From a theoretical point of view, there is a possible frame of reference,” explains Ricard Martínez, director of the Chair of Privacy and Digital Transformation at the University of Valencia. In the United States, most claims are covered by the Digital Millennium Copyright Act (DMCA) of 1998.

“When you take the real image of a person but modify it with any intention, there is an instrumental conduct that consists of processing their image without consent for a purpose that is not lawful,” explains Martínez. “Another thing,” he clarifies, “is a comedian who generates an image in a satirical spirit and in a clear context.”

But these regulations have proven insufficient, which is why Europe approved the digital services and digital markets laws in November 2022 (they came into force last May) to “protect the fundamental rights of users and establish fair conditions of competition for companies.” These regulations oblige large companies to collaborate in the risk assessment, identification, notification and removal of suspicious links.

“There are two important actors: the one who offers the tool, who will always say that its application was not designed to commit a crime, and the one who disseminates the creation, who acts as a loudspeaker. The law imposes more intense duties of collaboration on the latter,” adds Martínez.

Google acknowledges the new responsibilities and, in a brief written response to the increase in complaints, declares: “We have policies on non-consensual deepfake pornography, so people can have this type of content that includes their image removed from search results. And we are actively developing additional safeguards to help those affected. Furthermore, we have a takedown process that allows rights holders to protect their work on the Internet.”

Meta is moving along the same lines. Nick Clegg, its president of global affairs, announced on February 6: “We apply ‘Imagined with AI’ labels to photorealistic images created with our tool, but we also want to be able to do so with content created with tools from other companies.” He was referring to Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as they implement their plans to add metadata to images created by their tools.

The big technology companies are thus joining the legal crusade against deepfakes, together with the recent approval of the European artificial intelligence law, which requires unequivocal labeling of creations developed with this technology. The United States government is also moving in that direction. “It can no longer be argued that the use of the system or its results is an exercise of freedom of expression and freedom of creation,” the Valencian professor celebrates.

“The concern is shared, and we are beginning to see a convergence of interests between two different legal cultures. The message being sent to these companies is that not everything goes, that they cannot wash their hands, saying, ‘Hey, I’m just a platform and I can’t be responsible for everything.’ Information society service providers have a decisive influence on the viralization of the content that is displayed. They are not a neutral operator or a mere container. They are part of the operation, of the game,” concludes Ricard Martínez.

You can follow The USA Print on Facebook and X or sign up here to receive our weekly newsletter.
