
Risks of artificial intelligence

Geoffrey Hinton, an award-winning computer scientist known as “the godfather of artificial intelligence,” is having serious reservations about the fruit of his labors.

Hinton helped create artificial intelligence technologies that have been decisive for a new generation of chatbots such as ChatGPT. But in recent interviews, he revealed that he resigned from a senior position at Google specifically so he could share his concerns that the unrestricted development of artificial intelligence poses serious risks to humanity.

“I’ve suddenly changed my views on whether these things are going to be smarter than us,” he told MIT Technology Review in an interview. “I think they are very close now and that in the future they will be much smarter than us… how are we going to survive that?”

Hinton isn’t the only one with such concerns. Shortly after OpenAI launched GPT-4, the new version of its chatbot, in March, more than 1,000 scientists signed a letter calling for a six-month pause in artificial intelligence development because it poses “profound risks to society and humanity.”

Here’s a look at Hinton’s top concerns:

Intelligence depends on neurons

Our brains are capable of solving mathematical equations, driving cars, and following the plot of a television series thanks to their inherent talent for storing and organizing information and producing solutions to thorny problems. What makes that possible are the approximately 86 billion neurons that we have in our skulls and, more importantly, the 100 trillion connections that those neurons forge with each other.


By comparison, the technology underlying ChatGPT has between 500 billion and a trillion connections, Hinton said in the interview. Although that seems far less than the capacity of the human brain, Hinton points out that GPT-4, OpenAI’s latest artificial intelligence model, knows “hundreds of times more” than any single human being does. It is possible, he suggests, that it has “a much better learning algorithm” and is therefore more efficient at cognitive tasks.

It’s probably already smarter than us

Experts have long argued that artificial neural networks take much longer to absorb and apply new knowledge than people do, since training them requires a great deal of energy and data. That is no longer true, argues Hinton, noting that systems like GPT-4 are capable of learning extremely quickly once trained by researchers. That’s not too dissimilar to the way that, say, a professional physicist can analyze the results of an experiment much faster than an average high school student.

This leads Hinton to conclude that artificial intelligence systems are probably already smarter than we are. Not only can they pick up new things faster, but they can also share information with each other almost instantly.

“It’s a totally different kind of intelligence,” says Hinton. “It’s a new and better kind of intelligence.”

The power to upend elections and start wars

What could artificial intelligence systems smarter than humans do? One frightening possibility is that malicious individuals, groups, or states use them to achieve their own ends. Hinton is particularly concerned that these tools could be used to rig election results or start wars.


Election disinformation spread by chatbots, for example, could become the new version of the election misinformation that previously spread through Facebook and other social networks.

And that could be just the beginning. “Don’t doubt for a moment that Putin would be willing to use intelligent robots to kill Ukrainians,” Hinton said in the article. “He wouldn’t hesitate.”

A lack of solutions

What no one knows is how a power like Russia could be prevented from using artificial intelligence to subjugate its neighbors or even its own citizens. Hinton suggests that an agreement similar to the 1997 Chemical Weapons Convention may be needed to develop rules governing the use of artificial intelligence.

Notably, however, that convention did not prevent what researchers say were likely Syrian government attacks on civilians with chlorine and the nerve agent sarin in 2017 and 2018, during the Syrian civil war.
