
The new fraud of cloning voices to steal from older relatives


Ruth Card and her husband Greg Grace received an unexpected call a few days ago: their grandson Brandon was in jail without a wallet or phone, and he needed cash to post bail and go free. His grandparents did not hesitate and rushed to several banks to withdraw the amount he asked for. At the first bank, they took out $2,207. At the second, they were warned that another customer had received a similar call, and they began to suspect it was a scam.

The couple had a hard time believing it was a hoax, because their grandson’s voice sounded identical; nothing made them suspicious. However, the warning at the second bank prompted them to look into it. They were not wrong.

They were victims of one of the many impersonation scams taking place in the country, now powered by artificial intelligence (AI).



Their story was picked up by The Washington Post, which points to a worrying rise in these crimes. According to the Federal Trade Commission (FTC), consumers lost $8.8 billion to fraud of some kind in 2022, an increase of more than 30% over the previous year. The second-highest losses came precisely from impostor scams, with $2.6 billion reported in that period, up from $2.4 billion in 2021.


Cyberattacks and cybercrime are increasing not only in the US but throughout Europe, and they are becoming more sophisticated, according to the European Council. This trend is expected to worsen, as 41 billion devices worldwide are projected to be connected to the Internet of Things by 2025.

With AI

Everything is easier for cybercriminals and more complicated for users

Recreation of a man accessing the ‘dark web’ with a smartphone (Getty Images/iStockphoto)

Everything is easier for cybercriminals and more complicated for users, thanks to the advances and democratization of AI. The technology is making it easier to mimic voices and to convince people, often the elderly, that their loved ones are in danger. “We thought we were talking to Brandon,” the couple told the newspaper.

Today there are plenty of cheap online tools that can turn an audio file into a very convincing replica of a voice. This technology, based on generative artificial intelligence, needs only a recording of a few sentences. From there, it picks up the speaker’s tone and can reproduce whatever phrases are needed.
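To make the mechanics concrete, here is a minimal sketch of this kind of zero-shot voice cloning using the open-source Coqui TTS library and its XTTS v2 model. The article does not name a specific tool, so the library, model name and file paths below are assumptions for illustration only; the reference clip is a placeholder file.

```python
# Minimal voice-cloning sketch (illustrative assumption: Coqui TTS / XTTS v2).
# Install with: pip install TTS
from TTS.api import TTS

# Load a multilingual model that supports zero-shot voice cloning
# from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "reference_voice.wav" is a placeholder: a clip of a few sentences of the
# target speaker, e.g. taken from a public video or podcast.
tts.tts_to_file(
    text="This is a short demonstration of a cloned voice.",
    speaker_wav="reference_voice.wav",
    language="en",
    file_path="cloned_output.wav",
)
```

The point of the sketch is how little is required: one short reference clip and one line of text are enough to produce speech in a cloned voice.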



Gone are old tricks such as vishing, a type of fraud based on social engineering and identity theft. It is carried out through phone calls in which the attacker impersonates a company, an organization or even a trusted person in order to obtain personal information from victims. But it was never as convincing as these apps capable of cloning voices.


Method

A small audio sample is enough to simulate a voice in a very credible way

Simulation of human sound frequencies (Getty Images/iStockphoto)

More and more companies are developing voice-generation software, which analyzes a voice’s characteristic features: age, gender, accent and tone. “It’s kind of a perfect storm… with all the ingredients you need to create chaos,” Hany Farid, a professor of digital forensics at the University of California, Berkeley, told The Washington Post.

The specialist says it is very easy to clone a voice with AI. All it takes is a small audio sample of the person in question, taken from YouTube, podcasts, commercials, or TikTok, Instagram and Facebook videos.



New companies keep emerging in this sector. One of them is ElevenLabs, which specializes in natural-sounding speech synthesis and text-to-speech software using AI and deep learning. “The most realistic and versatile AI voice software ever. Eleven brings the most compelling, rich and realistic voices to creators and publishers looking for the best storytelling tools,” the company claims. The tool offers a free tier in addition to its paid plans.
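For readers curious what using such a service looks like in practice, below is a hedged sketch of a request to a hosted text-to-speech API like the one ElevenLabs offers. The endpoint path, header name, model identifier and JSON fields are assumptions based on the publicly documented v1 API and may have changed; the API key and voice ID are placeholders.

```python
# Hedged sketch of a hosted text-to-speech request (assumed ElevenLabs v1 API).
import requests

API_KEY = "YOUR_API_KEY"      # placeholder credential
VOICE_ID = "YOUR_VOICE_ID"    # placeholder: a stock or previously created voice

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Hello, this is a synthetic voice demonstration.",
        "model_id": "eleven_multilingual_v2",  # assumed model name
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
    timeout=60,
)
response.raise_for_status()

# The service returns audio bytes, which can be written straight to a file.
with open("synthetic_voice.mp3", "wb") as f:
    f.write(response.content)
```

This ease of integration is what specialists mean by the democratization of voice synthesis: a few lines of code and a short text string are all it takes.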

Microsoft has also jumped on the generative AI bandwagon. In January it announced that it was developing VALL-E, a text-to-speech (TTS) language model capable of learning and imitating any voice from a three-second recording. It does not just imitate: its developers claim it is capable of “preserving the emotion of the speaker and the acoustic environment of the message”.





