That artificial intelligence is one of the most important technological advances of recent decades – one of the most disruptive, as experts put it – is by now a truism. That it raises serious misgivings is equally clear. What is more striking is that four experts in the field, from different disciplines, warn of the dangers it carries. La Vanguardia asked them what there is to fear from artificial intelligence. Weaponization and the negative impact on those without access to the technology are among the risks, but above all it is the misuse to which AI is put. “The problem is not Frankenstein, but Dr. Frankenstein,” they warn.
Nick Bostrom
“We must prevent actors with bad intentions from using artificial intelligence to develop weapons”
There are three big challenges in the transition to the age of artificial intelligence. First, the alignment problem: we need scalable methods for AI control, a difficult and still unresolved technical problem, although many of the smartest people I know are working on it. Second, there is a governance problem, which has many facets and includes questions of justice, peace and democracy, but also requires preventing actors with bad intentions from using AI tools to develop, for example, biological weapons. And third, there is what we might call an empathy problem: as we build ever more sophisticated digital minds, at some point AIs themselves may become subjects with forms of moral status, perhaps initially analogous to that of some non-human animals. A lot to think about!
Director of the Future of Humanity Institute at the University of Oxford

Ramon Lopez de Mantaras
“Humans are often better at inventing tools than at using them wisely”
The short answer: what I fear most is the misuse that can be made of AI, and that is already being made.
And there is also a long answer. When people talk about the dangers of AI, it is implied that AI has its own objectives and intentionality, that is, that it has mental states. The idea is conveyed that it makes decisions completely autonomously and that in the future it might want to control the world. Those who have done the most to propagate this notion are those who speak of the technological singularity (the moment when AI will surpass human intelligence). The technological singularity is, in my opinion, an idea that is not scientifically sound. In fact, I think this supposed danger of future artificial ultra-intelligences is a diversionary maneuver to hide the true dangers of AI. The most obvious example is lethal autonomous weapons.
But it is not only that: there are also applications for the mass surveillance of citizens through facial recognition, or for manipulating particular people’s opinions for political ends. Many applications already deployed appear benign but are in fact also examples of AI misuse, such as algorithms that supposedly predict whether a defendant should be released on bail while awaiting trial, or that predict the risk of recidivism of aggressors in cases of gender violence (a system called VioGén, from the Ministry of the Interior), or that determine whether a report of robbery with violence and intimidation is false (a system called VeriPol, used by the Spanish police).
AIs are not moral agents; it is we humans who possess the attributes necessary for moral agency. The ethics of artificial intelligences is the ethics of the people who design and apply them. AI is absolutely dependent on people at every stage, from fundamental research to deployment. The problem is not Frankenstein’s monster; the problem is Dr. Frankenstein.
We humans are often better at inventing tools than at using them wisely. I am more afraid of natural stupidity than of artificial intelligence.
Researcher at the Institute of Artificial Intelligence of the CSIC

Lasse Rouhiainen
“Many people are going to be unemployed”
Three or four months ago, generative artificial intelligence began to become popular with the launch of ChatGPT. It is the first time that everyone has had free access to an impressive artificial intelligence with an easy-to-use interface. Those who know how to ride this wave of democratization of artificial intelligence will take advantage of it, achieving 10 or 20 times greater productivity, and will be able to launch new businesses quickly. It is like comparing a salesman who travels on horseback with one who drives a Ferrari. These are positive aspects that everyone should be aware of.
But is everyone aware? No. According to my analysis, perhaps 85% of the population will not be involved: they do not have the time or the capacity to learn how this tool works. What is going to happen? It will be like using a typewriter instead of a state-of-the-art computer. My estimate is that in Spain, within six months to a year and a half, many people in routine office jobs who do not use artificial intelligence tools are going to lose their jobs. That is what we should fear. Politicians are not paying attention to an issue that will cause major social problems.
Let me give an extreme example. In next year’s US presidential election, many fake videos and audio clips of candidates will be generated. Before, the companies that had the ability to do this were constrained by ethical considerations. Now that no longer matters, because there are many actors who can do it. We have to democratize technology so that everyone can share in the advances in AI, but we must also identify the problems, because if we don’t, we won’t be able to solve them. And we must solve them before they occur.
Writer, consultant and international expert in artificial intelligence, disruptive technologies and digital marketing

Francisco Herrera
“It is necessary to regulate and control the risks of AI”
Artificial intelligence, like other technologies, can pose risks inherent in the data used for intelligent systems to learn and in the applications we choose to build: intelligent systems that discriminate against minorities, that may affect fundamental rights, or scenarios where a wrong decision affects a person’s safety. These risk scenarios are covered by the European Commission’s AI Act, which sets out prohibited artificial intelligence practices (Title II) and high-risk AI systems (Title III). For the latter, a series of requirements must be met: risk management, data governance, transparency and communication of information to users, accuracy, robustness and cybersecurity, and human oversight.
For this reason, I believe that rather than “fear” we must be aware of the potential risks and work on their regulation and control, just as has been done with technologies such as nuclear power. It is true that AI is leading us toward a technological disruption, with great changes in the coming years in work and economic development, and it therefore deserves special attention.
Director of the Andalusian Interuniversity Institute in Data Science and Computational Intelligence