Marc Serramià: “If we all trust tools like ChatGPT, human knowledge will disappear”

Marc Serramià (Barcelona, 30 years old) is concerned that the dizzying arrival of artificial intelligence (AI) in our lives is not being accompanied by a serious debate about the risks this technology involves. Given the ethical dilemmas it raises, Serramià has decided to focus his research on developing techniques “to ensure that the behavior of these systems is consistent with human values and social norms.” His work earned him the award from the Spanish Scientific Society of Informatics (SCIE) and the BBVA Foundation, which each year distinguishes young researchers who have authored innovative doctoral theses.

The Catalan researcher compares his work in the field of AI to the way traffic rules set standards for how society behaves. “We have speed limits on the road because we value drivers’ lives more than getting to our destination quickly,” says Serramià, who holds a PhD in Engineering (specializing in artificial intelligence) from the University of Barcelona and is currently a lecturer in the Department of Computer Science at City, University of London.

Question. Some experts say the risks of AI should be taken as seriously as the climate emergency. What do you think?

Answer. I agree. A good example is medication. To put a drug on the market, you not only have to show that it has a positive primary effect; its side effects must also not be worse than that primary effect. Why doesn’t the same happen with AI? When we design an algorithm, we know what its main function will be, but not whether it will have side effects. I think in the case of medicines or weapons we see this very clearly, but with AI not so much.

Q. What dangers are we talking about?

A. There are many. One of them, on which I focus part of my research, is privacy. Even if we anonymize data, it is always possible to reverse-engineer it and infer things about you: to serve you personalized advertising, to decide whether or not to grant you a bank loan, or for a potential employer to judge whether you are the profile they are looking for. Our work proposes the following: since algorithms are used to study us, why not use them for good things too, like learning your privacy preferences? In other words, if I tell you I don’t want you to share my location, don’t ask me again. What we have proposed is that an AI can learn from the user and act as their representative in this process, defining their preferences by predicting them from the information it has about them. We built a very simple AI tool, and even so, our data shows that it was able to predict users’ actual preferences with good reliability.
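A minimal sketch of that idea, with invented feature names and a standard decision-tree classifier standing in for whatever model the research actually used: the agent learns from the user’s past allow/deny decisions and answers new sharing requests on their behalf, so the user is not asked again.

```python
# Hypothetical sketch of a privacy-preference "representative" agent.
# The feature names and the decision-tree model are illustrative
# assumptions, not the method from Serramià's actual research.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Past decisions the user made when asked "share X with Y?"
# (data_type, recipient, allowed: 1 = yes, 0 = no)
history = [
    ("location", "advertiser", 0),
    ("location", "family",     1),
    ("contacts", "advertiser", 0),
    ("contacts", "app",        0),
    ("photos",   "family",     1),
    ("photos",   "advertiser", 0),
]

X_raw = [[data, recipient] for data, recipient, _ in history]
y = [allowed for _, _, allowed in history]

encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

# A deliberately simple model: enough to capture rules such as
# "never share anything with advertisers".
agent = DecisionTreeClassifier().fit(X, y)

# A new request arrives; the agent answers on the user's behalf
# instead of interrupting them with yet another consent dialog.
request = encoder.transform([["location", "app"]])
print("Share location with the app?", bool(agent.predict(request)[0]))
```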

In October, Serramià received one of the six awards that the Spanish Scientific Society of Informatics (SCIE) and the BBVA Foundation grant to promising young researchers. Photo: Jaime Villanueva

Q. What other problems would you highlight beyond privacy?

A. Smart speakers, like Alexa, were rushed to market very quickly, and they have flaws. For example, sending sensitive conversations to contacts with whom you did not want to share that information. Less of an everyday concern, but surely more consequential, is the danger posed by autonomous weapons.

Q. To what extent should we fear autonomous weapons?

A. Their production is at a very advanced stage. My thesis supervisor took part in a United Nations conference on this topic, and the prevailing view she heard among the politicians and military personnel present was: well, we don’t want them, but if we don’t develop them, another country will. Striking a balance is very difficult. There will always be someone willing, and that will drag the others along.

Q. When we talk about autonomous weapons, are we referring to drones?

A. For now, I think drones are the most widespread, yes. In the future, it could mean armed humanoid robots. Right now, drones carrying explosives are being used in the war between Ukraine and Russia. But they can also be fitted with guns to shoot.

We must stop the development of autonomous weapons with decision-making capacity, because we are creating things without knowing how they work or what effects they may have

Q. Is there a way to stop that? Or is the automation of war inevitable?

A. What we recommend is trying to stop, or slow down, the development of autonomous weapons with decision-making capacity, because in reality we are creating things without knowing how they work or what effects they may have. And that is very dangerous. The problem is that companies know that if they don’t build them, others will, and in the end a kind of race sets in. It would be good if there were some type of certification in this area. You could start with consumer products, such as smart speakers: if you go into a store and see one that is certified, meaning an ethical review has verified that it respects privacy, you are likely to buy that one and not another.

Q. Does ethical artificial intelligence really exist?

A. Yes, although it is not very visible. It is new ground: the first international conference on ethical artificial intelligence was held in 2018. One topic I am working on is using AI to improve participatory budgeting processes, like Decidim Barcelona. One of the problems these platforms have is that few people take part, and studies show that, in general, the most disadvantaged groups vote less, which biases the selection of projects. We built them an algorithm that could incorporate the value system of people who do not participate, whether because they cannot or because they do not want to, in a way that takes their sensitivities into account. The aim is to minimize the biases that can arise when decisions are voted on by only a few. The interesting thing is that in our experiments we saw that we can find a good balance, in which participants are satisfied and those who did not participate are also represented.
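A toy sketch of that balancing act, assuming invented project data, a simple 50/50 weighting, and a greedy selection rule, none of which come from the actual Decidim Barcelona work: each project’s score blends the support of actual voters with the support predicted for non-participants, and projects are then funded under a budget.

```python
# Toy sketch: blending participant votes with preferences predicted for
# non-participants when choosing projects under a budget. All numbers,
# the weighting, and the greedy rule are illustrative assumptions.

projects = {
    # name: (cost, support among voters, predicted support among non-voters)
    "bike lanes":       (40, 0.70, 0.40),
    "public housing":   (60, 0.30, 0.80),
    "park renovation":  (30, 0.60, 0.55),
    "community center": (50, 0.45, 0.75),
}
BUDGET = 100
ALPHA = 0.5  # how much weight the represented non-participants receive

def value_per_cost(cost, voted, predicted):
    # Blend both constituencies, then favor cheap, high-value projects.
    return ((1 - ALPHA) * voted + ALPHA * predicted) / cost

# Greedily fund the best blended value per unit of cost.
ranked = sorted(projects.items(),
                key=lambda item: value_per_cost(*item[1]), reverse=True)

funded, spent = [], 0
for name, (cost, voted, predicted) in ranked:
    if spent + cost <= BUDGET:
        funded.append(name)
        spent += cost

print("funded:", funded, "| spent:", spent)
```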

Q. Is it possible to code algorithms to be ethical?

A. On a theoretical level, yes. My research is limited to that plane; I focus on multi-agent systems (several intelligent systems that interact with each other). The idea is to think about how to design, for a tomorrow in which AI will be everywhere, a system of norms that ensures those systems are aligned with our values. Then there is another line of research into how to transfer this to the practical level, but we won’t go into that here.

Q. And how can it be done?

A. Artificial intelligence can be seen as a mathematical formula that tries to change the state of the world in order to maximize that formula. Although it appears to behave intelligently, it is still an optimization mechanism. You can put rules in the code, or you can modify that mathematical formula so that it is penalized when a rule is broken. The algorithm just wants to do well: it will pursue whatever helps it reach the system’s design goal, but without knowing what it is doing.
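A minimal sketch of that second option, penalizing the formula when a rule is broken. The toy objective, the norm, and the penalty weight below are invented for illustration; the point is that the same optimizer, given the modified formula, simply starts preferring states that respect the norm.

```python
# Minimal sketch of encoding a norm as a penalty inside the objective.
# The toy objective, the norm, and the penalty weight are invented for
# illustration; they are not taken from Serramià's research.

def objective(x: float) -> float:
    # A toy "design goal": the optimizer just wants this number to be big.
    return -(x - 8) ** 2 + 100

def violates_norm(x: float) -> bool:
    # A toy norm, analogous to a speed limit: states above 5 are forbidden.
    return x > 5

PENALTY = 1000.0

def penalized_objective(x: float) -> float:
    # The same formula, minus a penalty whenever the norm is broken.
    return objective(x) - (PENALTY if violates_norm(x) else 0.0)

candidates = [i / 10 for i in range(101)]  # crude search over states 0..10

best_unaware = max(candidates, key=objective)
best_aware = max(candidates, key=penalized_objective)

print("ignoring the norm, the optimizer picks x =", best_unaware)  # 8.0
print("with the penalty, it picks x =", best_aware)                # 5.0
```

The optimizer never “understands” the norm; it only sees that breaking it makes the number it is maximizing smaller.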

On a theoretical level, it is possible to code algorithms to be ethical

Q. But then those algorithms are used by someone who can bypass those rules.

A. Of course; in the end, an intelligence is only as ethical as whoever uses it. But our research focuses on how we can make the algorithms themselves free of bias. It is theoretical work for a future in which we imagine we will coexist with sophisticated AI systems.

Q. What do you think of generative AI, the technology behind ChatGPT or Gemini? What ethical problems does it raise?

A. The concerns there focus more on explaining what is generated, or on the fact that you cannot guarantee that what is generated makes sense. The algorithm doesn’t understand anything; all it does is find things similar to what it has been shown, put them together and generate something. The term machine learning can be misleading, because the machine has not learned or understood anything. What it has is a sophisticated mathematical formula that gets adjusted, so that if you ask it for an illustration of a cat, it will produce an illustration of a cat, but it does not understand what a cat is.
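A toy illustration of that point, using a tiny bigram model over a made-up corpus (vastly simpler than any real generative model, but the same principle): the program emits statistically plausible continuations of what it was shown, with no notion of what any word means.

```python
# Toy bigram "language model": it only reproduces probable continuations
# of text it was shown, understanding nothing. A real generative model is
# enormously more sophisticated, but the principle is the same.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat chased the mouse the mouse ran".split()

# Count which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    if not follows[word]:  # dead end: the corpus never continues from here
        break
    word = random.choice(follows[word])  # pick a probable next word
    output.append(word)

print(" ".join(output))  # plausible-looking, meaning-free text
```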

Q. The effect these tools may have on certain kinds of people has not been measured. One man took his own life after weeks of conversation with a chatbot that encouraged him to take that step.

A. There are several things here. The first is a problem of ignorance: people do not know how these systems work. However human-like the text it produces, a chatbot only returns probable results. It is not intelligent at all, and even less emotional, although it can give that impression. There is also a problem in the field of education. It is not just that students use ChatGPT to do their homework: if we all rely on these kinds of tools, human knowledge will disappear. The algorithm will make a mistake and no one will know it has done so. And it has already been shown that many models invent answers. Tobacco packets warn that smoking kills. The same should happen with AI.

Is it enough to put up a message that says ‘generated by AI’? People ask ChatGPT which party to vote for in the next election, or what medicine to take

Q. You are referring to some kind of seal or certification.

A. Exactly. The industry has grown rapidly, and governments are always slower. We are at that moment when there is a lot of development and little certification and regulation. I believe that in the end this will be resolved and we will even be better off. But right now is a dangerous time.

Q. What do you think of the European AI regulation?

A. It seems like a good first step to me. In any case, perhaps we have been too permissive with generative AI. ChatGPT and other similar tools are language models: their virtue is writing text that sounds human, not writing text that is true. Yet companies are selling them to us as if it were. Can we be sure that putting up a message that says “generated by AI” is enough? Incredibly, people ask ChatGPT things like which party they should vote for in the next election, whether they should hire a certain person, or what medication to take if they have certain symptoms. Not to mention questions like “I don’t want to live, what should I do?” I think more should be demanded of generative AI. There are topics it should not be allowed to address, and others where, if it can, guarantees should be required. Much of the debate so far has focused on copyright, which is also very important, but this other debate seems crucial to me as well.

Q. Should we be afraid of AI?

A. No, I think we should have respect for it. And, as citizens, we should demand that governments get to work and regulate this properly. As consumers, we should not use products or services that we believe do not meet certain standards. If we all behave this way, we will force the industry to opt for more ethical options.
