The black box of AI that resists researchers

When a neural network runs, not even the most specialized researchers know exactly what is happening inside it. And we are not talking about biology but about a family of artificial intelligence algorithms, those based on deep learning, so named because they imitate the connections between neurons. These systems remain an indecipherable black box for data scientists, for the brightest minds in academia and for the engineers at OpenAI and Google, some of whom have just received the Nobel Prize.

The mathematics underlying these algorithms is well understood. What is not understood is the behavior the network generates. “Although we know what data goes into the model and what the output is, that is, the result or the prediction, we cannot clearly explain how that output was reached,” says Verónica Bolón Canedo, AI researcher at the Information and Communication Technologies Research Center (CITIC) of the University of A Coruña.

This happens with ChatGPT, Google Gemini, Claude (the model from the startup Anthropic), Llama (Meta's) and any image generator such as DALL-E. But it also happens with any other system based on neural networks, from facial recognition applications to content recommendation engines.

Other artificial intelligence algorithms, such as decision trees or linear regression, used in medicine or economics, are decipherable. “Their decision processes can be easily interpreted and visualized. You can follow the branches of the tree to know exactly how a certain result was reached,” says Bolón.
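
By way of illustration, a trained tree can literally be printed as a set of rules and followed branch by branch. Here is a minimal sketch, using scikit-learn and its bundled iris dataset purely as an example:

```python
# A tiny decision tree whose full decision logic can be read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text prints every branch as an explicit if/else rule,
# so any individual prediction can be traced step by step.
print(export_text(tree, feature_names=iris.feature_names))
```

No equivalent printout exists for a deep neural network, which is precisely the problem the rest of this article describes.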

This is important because it injects transparency into the process and therefore offers guarantees to whoever uses the algorithm. Not for nothing does the EU AI Regulation insist on the need for transparent and explainable systems. And that is precisely what the very architecture of neural networks prevents. To understand the black box of these algorithms, picture a web of neurons, or nodes, connected to one another.

“When you feed data into the network, the calculations begin. You trigger the calculations with the values that are in the nodes,” says Juan Antonio Rodríguez Aguilar, research professor at the Artificial Intelligence Research Institute (IIIA) of the CSIC. The information enters the first nodes and from there it spreads, traveling as numbers to other nodes, which in turn bounce it on to the next ones. “Each node calculates a number, which it sends to all its connections, taking into account the weight (the numerical value) of each connection. And the new nodes that receive it calculate another number,” the researcher adds.
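
A minimal sketch of that propagation, assuming a toy fully connected network with made-up weights, could look like this:

```python
# Each layer: every node takes a weighted sum of the numbers sent to it
# and passes a new number on to all its outgoing connections.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # weights of connections: 3 inputs -> 4 hidden nodes
W2 = rng.normal(size=(4, 2))  # weights of connections: 4 hidden -> 2 output nodes

def forward(x):
    h = np.tanh(x @ W1)  # each hidden node computes its number
    return h @ W2        # each output node combines the hidden nodes' numbers

print(forward(np.array([0.5, -1.0, 2.0])))
```

Every number that comes out is the product of all those intermediate multiplications and combinations, which is why no single step "explains" the result.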

It must also be borne in mind that today's deep learning models have millions or even billions of parameters. These parameters are, broadly speaking, the values attached to the network's nodes and connections after training, and therefore all the values that can influence the result of a query. “In deep neural networks there are many elements that multiply and combine. You have to imagine this with millions of elements. It is impossible to extract an equation that makes sense from there,” Bolón asserts. The variability is enormous.
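
To get a feel for how quickly those values pile up, here is a back-of-the-envelope count for a small, made-up stack of fully connected layers (the sizes are illustrative and do not correspond to any real model):

```python
# Each layer contributes a weight matrix plus a bias vector; stacking a
# handful of modest layers already yields tens of millions of parameters.
layer_sizes = [1024, 4096, 4096, 4096, 1024]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total += n_in * n_out + n_out  # weights + biases of one layer

print(f"{total:,} parameters")  # about 42 million for this toy stack
```

Every one of those values participates, through chains of multiplications, in every answer the network gives.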

Some industry sources have estimated that GPT-4 has almost 1.8 trillion parameters. According to this estimate, it would use about 220 billion parameters for each language. That means some 220,000,000,000 variables can affect the algorithm's response every time it is asked something.

On the hunt for biases and other problems

Due to the opacity of these systems, it is more difficult to correct their biases. And the lack of transparency generates mistrust when they are used, especially in sensitive areas such as medical care or justice. “If I understand what the network does, I can analyze it and predict whether there will be errors or problems. It is a matter of safety,” warns Rodríguez Aguilar. “I would like to know when it works well and why. And when it doesn’t work well and why.”

The big names in AI are aware of this gap and are working on initiatives to better understand how their own models work. OpenAI's approach consists of using one neural network to observe the mechanisms of another neural network in order to understand it. Anthropic, the other leading startup, whose founders came from OpenAI, studies the connections that form between nodes and the circuits that arise when information propagates. Both look at elements smaller than nodes, such as activation patterns or individual connections, to analyze a network's behavior. They start from the smallest pieces with the intention of scaling the work up, but it is not easy. “Both OpenAI and Anthropic are trying to explain much smaller networks. OpenAI is trying to understand GPT-2’s neurons, because the GPT-4 network is too large. They have to start with something much smaller,” Rodríguez Aguilar clarifies.
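
The general idea of looking at activation patterns, stripped of all the labs' actual machinery, can be sketched in a few lines. Here is a toy example, assuming PyTorch, that records what a hidden layer produces on each pass; it is a stand-in for interpretability tooling, not OpenAI's or Anthropic's method:

```python
# Record the activation pattern of a hidden layer with a forward hook.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 2))
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # keep a copy of what the layer emitted
    return hook

# Every forward pass through the hidden layer now leaves a trace.
model[1].register_forward_hook(save_activation("hidden_relu"))

model(torch.tensor([0.5, -1.0, 2.0]))
print(activations["hidden_relu"])  # which units fired, and how strongly
```

Scaling this kind of inspection from eight toy units to the billions in a production model is exactly the part that, as Rodríguez Aguilar notes, has to start small.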

Deciphering this black box would bring benefits. In language models, the most popular algorithms of the moment, it would help avoid faulty reasoning and limit the famous hallucinations. “A problem you could potentially solve is that the systems often give inconsistent answers. Right now everything works very empirically. Since we do not know how to interpret the network, the most exhaustive training possible is done and, if the training goes well and the test is passed, the product is launched,” explains Rodríguez Aguilar. But this process does not always go well, as became clear with the launch of Google Gemini, which initially generated images of Nazis with Asian features or Black Vikings.

Closing this gap in our understanding of how the algorithms work would also be in line with legislative aspirations. “The European AI Regulation requires developers to provide clear and understandable explanations about how AI systems work, especially in high-risk applications,” says Bolón, although she clarifies that these systems can be used as long as users receive sufficient explanations about the basis of the decisions the system makes.

Rodríguez Aguilar agrees that there are tools to explain an algorithm's results, even if it is not known exactly what happens during the process. “But what worries me most, more than explainability and transparency, is the issue of robustness, that the systems be safe. What we seek is to identify circuits in the network that may not be safe and could give rise to unsafe behavior,” he stresses.

The ultimate goal is to keep AI under control, especially when it is used for sensitive matters. “If you are going to put an AI in a hospital to suggest treatments, behind the wheel of an autonomous vehicle or in charge of financial recommendations, you have to be sure that it works.” Hence researchers' obsession with understanding what happens in the guts of an algorithm. It goes beyond scientific curiosity.
