Dystopia as marketing: why OpenAI thinks we want to fall in love with a robot

This article is part of the weekly Technology newsletter, which is sent out every Friday. If you want to sign up to receive the full version, with similar but more varied and shorter content, you can do so at this link.

“Wow, this little outfit looks great on you,” the new version of ChatGPT unexpectedly said to an OpenAI employee during its presentation. The employee, Barret Zoph, had just written “I love ChatGPT” on a piece of paper after the machine helped him solve a problem. “You’re so sweet,” ChatGPT replied.

This exchange lasted only a few seconds of the barely half-hour presentation of ChatGPT-4o (the “o” stands for “omni”) last Monday. Its main new features are the elimination of latency, which makes it feel more like a live conversation, and the ability to “see”, understand and talk about whatever is in front of it. It is still a language model like the one in the web version of ChatGPT, but now it talks, laughs at its own jokes, puts on a sarcastic voice and, of course, seems to flirt. In the clip OpenAI posted on YouTube, that fragment has been cut.

The great reference for an AI that flirts is Her, with the voice of Scarlett Johansson. I have watched the movie again, and it is hard to believe that OpenAI boasts of wanting that future for humanity. Sam Altman, CEO of OpenAI, tweeted “her” during the presentation. Other researchers at the company tweeted along the same lines. “You’re all going to fall in love,” said one (luckily he didn’t add “with her”), before quoting a tweet from a user who, along with a video, wrote: “The ChatGPT voice is… sexy?”

Yet another employee wrote: “I rewatched Her last weekend and it felt like watching Contagion in February 2020.” Her is from 2013; Contagion is a 2011 film about a pandemic.

The movie Her is therefore something of a reference for the creators of ChatGPT-4o, or at least a topic they have debated internally. It is hard not to. As the specialized journalist Brian Merchant recalls in his newsletter, dystopian science fiction references are a common resource in Silicon Valley. The most likely explanation is marketing: it is easier to sell “Her” than “a conversational language model like nothing ever seen before.”

Google presented its Project Astra on Tuesday, which does the same thing as ChatGPT-4o but with a more aseptic voice and tone. After putting down her cell phone, the Google employee used a pair of glasses to chat with her model. Google is more cautious with its marketing, but its co-founder, Sergey Brin, dropped into supposedly casual conversations with journalists the idea that Google Glass had simply arrived too soon. In the Google demo, the camera looked out the window and the model said, “I’d say this is King’s Cross.” They are not that far from the Terminator’s vision.

In his day, Elon Musk boasted that his new vehicle, the Cybertruck, looked like something out of Blade Runner. Or perhaps the best example is the metaverse, adopted by Meta, which comes from Ready Player One. Dystopias turned into references. It is worth pointing out the details of the movie that the marketing conveniently forgets:

1. The movie ends badly. Samantha, the robot voiced by Scarlett Johansson, disappears at the end because the company shuts it down. Shortly before, the protagonist had discovered that he was not her only boyfriend: “I’m in love with 641 people,” she had told him. She spoke with more than 8,000 people at a time. It is a business and, like almost everything in the digital world, we only rent it.

To have physical sex, the robot sends a woman to act as its body. The machine denies that this is prostitution, and the protagonist believes it. In the end it goes badly, but her powers of persuasion are unbeatable.

2. The humanization of this technology creates real problems. That is where the big problem with this technology comes from: a voice that sounds human, that laughs, that talks about what it sees, is going to convince us of many things. At the end of April, more than two dozen Google researchers published a scientific paper titled The Ethics of Advanced AI Assistants. In it they wrote: “Empirical findings show that when a digital virtual assistant uses a realistic voice rather than a synthetic one, people tend to be more emotionally trusting and have a greater impression of social presence. Additionally, assistants that speak more like humans generate perceptions of intelligence and competence, making people more likely to trust them with more tasks.”

For OpenAI, it is better that its assistant seems like a real person: that way we will use it more. But we are also going to give it more of our affection, and we do not have that much to spare.

3. It is always a sensual female voice. One of the memes to come out of the OpenAI presentation was about the falling value of “girlfriends.” If a robot can act as a fun, kind and sensual girlfriend, why have a real one?

It is the same problem the protagonist of the movie has, although he also manages to disappoint Samantha, his robot.

The choice of a female voice is also interesting. A friend of the protagonist also has a virtual boyfriend, but his voice is never heard. And in the debate there is hardly any talk of these humanized assistants becoming “boyfriends.”

The Google researchers’ paper says this about stereotyping: “When the simulated voice of a digital virtual assistant imitates a female tone, people assign it gender stereotypes, even though it makes no sense to apply gender concepts to an entity that has no gender.”

4. It is all an artifice to make money. Logically, none of this would exist if OpenAI were not in a race to dominate the AI agent sector and, soon, search. OpenAI will have calculated that it was more advantageous to lean on Her and its sensual tone to attract attention, and to let Google play the part of the responsible company.

Another paper, from October 2023, on the humanization of these systems strongly recommends against it: “We recommend that future efforts to develop dialogue systems take special care in their design, development, release and description, and pay attention to the many linguistic cues that can cause users to humanize them.”

Sam Altman wrote a short post praising ChatGPT-4o. What he highlighted was precisely its realism: “The new voice mode is the best interface I have ever used. It seems like AI from the movies.” Of course it seems that way, with consequences that are difficult to foresee.

5. The movie’s big mistake. The film contains one of those wonderful errors born of a lack of imagination. The protagonist works at a company that writes letters by hand for other people. Its website is BeautifulHandwrittenLetters, a domain that is available today. That skill is precisely something ChatGPT could already do before it could become anyone’s girlfriend.

It is true that the merit of the letter writers (apart from imitating handwriting) lies in knowing the details about the members of each couple. But even that is easily solved.

6. It is hard to believe we will all be talking to ourselves. Although ChatGPT-4o talks like a human, I have a hard time imagining an entire subway car whispering into its headphones, as happens in the movie, or dozens of people talking to themselves on the street.

Maybe it is a generational thing and today’s 10-year-olds will end up chatting with their devices, much as today’s parents are obsessed with their screens.

Altman says it will become natural soon: “Talking to a computer has never felt natural to me, but now it does.” But we don’t know if Altman is sincere. His business also depends on our emotions.
