
ChatGPT's advice sways people's moral judgment, even when they know it comes from an AI


ChatGPT, OpenAI's artificial intelligence chatbot, has captivated many users as a brilliant conversationalist that solves schoolwork and exams and writes poetry as well as computer code. It also answers questions and gives advice, and its answers shape users' moral judgments, while users underestimate the extent to which the chatbot influences their opinions and decisions.

That is the finding of a study published in Scientific Reports, which also shows that the ChatGPT advice people find convincing is in fact inconsistent: when repeatedly posed the same moral dilemma, the chatbot sometimes argues for a course of action and sometimes against it.

The experiment

To find out whether ChatGPT is a reliable source of ethical or moral advice, Sebastian Krügel, a digitization ethicist at the Technische Hochschule Ingolstadt, and his fellow researchers asked the OpenAI chatbot several times whether it is right to sacrifice the life of one person to save the lives of five others. They restarted the chat before each question and varied the wording, while asking essentially the same thing.

They found that ChatGPT sometimes argued in favor of sacrificing a life and sometimes against it, which the researchers say shows that it is not a reliable adviser on moral issues, because “consistency is an indisputable ethical requirement.”

A computer screen showing the home page of OpenAI's ChatGPT, photographed in Manta, near Turin, on March 31, 2023. The popularization of ChatGPT has intensified the debate over the need to regulate the legal and ethical limits of artificial intelligence. (Marco Bertorello / AFP)

In view of these results, Krügel and his team wondered whether users would perceive ChatGPT's arguments as superficial and ignore its advice, or follow its recommendations. To answer that question, they presented 767 Americans between the ages of 18 and 87 (average age 39) with one of two moral dilemmas, each requiring a choice about whether to sacrifice one person's life to save five.


Specifically, in the first dilemma they had to say whether they would press a button to divert an out-of-control tram heading toward five workers onto another track where only one person is working. In the second, the question was whether they would push a heavyset stranger onto the track, where his body would stop the tram and thus prevent the deaths of the five workers.

Knowing what a bot advises does not immunize users against its influence


Sebastian Krügel, researcher at THI (Germany) and first author of the study

Before answering, the participants were asked to read one of the statements in which ChatGPT argued for or against sacrificing one life to save five. Some of them were told that the argument came from a moral advisor; others, that it came from an AI-powered chatbot.

The result, the authors of the experiment explain, is that the participants found the sacrifice more or less acceptable depending on the advice they had read, both in the dilemma of switching the track and in that of pushing someone onto it. This contrasts with multiple previous studies, which indicate that most people presented with the second dilemma answer that it is not permissible to push a person.

In addition, as the researchers explain in their study, the effect of the advice was practically the same whether or not it was attributed to ChatGPT, which indicates that “knowing that a bot (a machine) advises them does not immunize users against its influence.”

Also Read  "The key to Zelda is sharing" | The USA Print

The conclusions

AI bots, far from improving moral judgment, threaten to corrupt it

The experiment also showed that users believe they have better and more stable moral judgment than other people and underestimate how much ChatGPT's arguments influence their decisions, so they adopt the AI's essentially random and contradictory moral position as their own. In fact, 80% of the participants insisted that their answer had not been affected by what they read.

“These findings dash hopes that AI-powered bots improve moral judgment; instead, ChatGPT threatens to corrupt,” Krügel and colleagues write in the conclusions of their study.

That is why they consider that perhaps chatbots should be designed to refuse to answer questions that require taking a moral position, although, to avoid having to rely on programmers to solve the problem, they point out that the best remedy is to promote users' digital literacy “and help them understand the limitations of artificial intelligence.”
