This is how machines incorporate biases and stereotypes | The USA Print


Artificial intelligence (AI) pervades our lives: it is used to build the personalized recommendations that reach our phones, improve medical diagnoses, write essays, find errors in texts, create games, and even decide whether to grant a mortgage. All of it is done by a trained machine capable of processing huge amounts of data, finding a pattern, and offering the optimal solution.

But these machines are not perfect; their conclusions can also turn out sexist or racist. Now that ChatGPT is on everyone’s lips for its remarkable capabilities, many will remember Tay, the Microsoft bot built to hold conversations with Twitter users. It did not even last 48 hours: the machine went from tweeting things like “humanity is cool” to things like “Hitler was right and I hate the Jews”, along with an endless stream of sexist proclamations.

If AI is used to determine who gets a mortgage, men will have more options than women

ChatGPT rendering as an android made by OpenAI's Dall-E AI

Gemma Galdon, founder of Eticas Research and Consulting

The responses from these machines are biased because the data they are trained on are biased as well. “AI works through algorithms and mathematical techniques that allow patterns to be extracted from large amounts of data; that is what we call learning. But those data are not representative of the phenomenon or of the population studied, so the system has a partial view of reality”, explains Josep Curto, director of the Master’s in AI at the UOC.




“For example, if a bank wants to use AI to decide who gets a mortgage and who doesn’t, the machine draws on past data. Based on those data, men will have better odds than women, because historically we have been granted fewer mortgages and because the system assigns us a risk profile”, points out Gemma Galdon, PhD in technology policy, algorithm auditor and founder of Eticas Research and Consulting.
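Galdon’s mortgage example can be made concrete with a small sketch. The data here are entirely synthetic and the approval rates are hypothetical; the point is only that a statistical learner trained on past decisions reproduces the disparity baked into them.

```python
# Synthetic demonstration: a model trained on historically biased
# mortgage decisions inherits that bias. All numbers are illustrative.
import random

random.seed(0)

# Historical records: (gender, approved). Suppose past lenders approved
# roughly 70% of male applicants but only 50% of female applicants with
# otherwise comparable profiles.
history = [("M", random.random() < 0.70) for _ in range(5000)] + \
          [("F", random.random() < 0.50) for _ in range(5000)]

def fit_rates(records):
    """A naive 'model' that learns the approval rate per group --
    exactly the pattern a statistical learner extracts from such data."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [ok for g, ok in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit_rates(history)
print(f"P(approve | M) = {model['M']:.2f}")
print(f"P(approve | F) = {model['F']:.2f}")
# The learned scores reproduce the historical gap: men score higher,
# not because of creditworthiness but because of past decisions.
```

Any more sophisticated model fitted to the same records would learn the same gap, because the gap is the strongest pattern in the data.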

Illustration: a Janus head with one human face, looking towards the past, and one robotic face, looking towards the future. Getty Images/iStockphoto

The expert gives another example, this time from the health field: “If we use an AI to detect which condition certain symptoms correspond to, it will almost certainly never tell you endometriosis, because traditionally female diseases are far less studied and diagnosed than male ones. It will also be less accurate in diagnosing a woman’s heart attack. The symptoms are different from a man’s, so the system is likely to work worse in her case, because it has been fed other kinds of data.”

“If you use historical data and it’s not balanced, you’ll probably see negative conditioning related to black, gay and even female demographics, depending on when and where that data comes from,” continues Juliana Castañeda Jiménez, a UOC researcher and lead author of a recent Spanish study published in the journal Algorithms.


To gauge the scope of the problem, the researchers analyzed previous works that identified gender biases in the data processes of four types of AI: natural language processing and generation, decision management, facial recognition, and voice recognition.


AI system designers include biases in all phases of the project: from preparing the data to presenting the results.


Josep Curto, director of the Master’s in AI at the UOC

In general, they found that all the algorithms identify and classify white men best. They also observed that the systems reproduced false beliefs about the physical attributes that supposedly define people according to their biological sex, ethnic or cultural origin, or sexual orientation, and that they stereotypically associated masculinity with the sciences and femininity with the arts. Image and voice recognition applications had problems too: they struggled with higher-pitched voices, which mainly affects women.

“AI is always limited by the past. It cannot generate new things; it can only find patterns in the data it was given for training. Anything new will always be underrepresented, which makes it a terribly conservative force that leads us to reproduce what already exists rather than create anything new”, says Galdón.

The problem lies not only in the data these machines collect, riddled with biases and stereotypes, but also in the people in charge of them. “AI system designers include these biases throughout the project: in the preparation of the data, in the model, in the interpretation of the results and/or in the presentation of the results,” adds Curto.


Can ChatGPT become sexist?




ChatGPT, the conversational chatbot dazzling the planet with its advanced capabilities, can also fall into these biases. One of the main advantages of the tool, developed by OpenAI, is the range of sources it uses: not only the data circulating on forums and social networks, a priori of poorer quality, but also press articles and even doctoral theses.


But it can also fall into these errors. “This system (and others) are susceptible to veracity problems; they include biases and even perpetuate stereotypes. In fact, the company behind it, OpenAI, knows this and has devoted efforts to alleviating these problems through human curation. However, there is still a lot of work to do, because problems can emerge throughout the entire cycle of creating an AI system (from generating and capturing the data to presenting the answers), and sometimes they are only detected once these systems are opened to the public”, warns the professor.



These biases can (and should) be avoided. “The solution involves multiple tasks. The first is to improve the data sets used to train the system in terms of quality, veracity and identification of biases. The second is to include problem-monitoring systems throughout the AI system’s life cycle. Lastly, interpretability and explainability mechanisms must be built into the results, to understand where they come from and why a particular response is proposed”, says Curto.
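The first two tasks Curto describes can be sketched in a few lines: a check that flags groups underrepresented in a training set, of the kind a monitoring system might run throughout the life cycle. The field names and the imbalance threshold are illustrative assumptions, not a standard.

```python
# Sketch of a dataset-representation check: flag groups whose share of
# the training data deviates from parity by more than a tolerance.
# Threshold and field names are illustrative, not a standard.
from collections import Counter

def representation_report(records, group_key, tolerance=0.10):
    """Return {group: (share, flagged)} for each value of group_key."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    parity = 1 / len(counts)  # equal share across groups
    return {group: (n / total, abs(n / total - parity) > tolerance)
            for group, n in counts.items()}

# Hypothetical training set: 80% male records, 20% female records.
data = [{"gender": "M"} for _ in range(800)] + \
       [{"gender": "F"} for _ in range(200)]

for group, (share, flagged) in representation_report(data, "gender").items():
    print(f"{group}: {share:.0%} {'IMBALANCED' if flagged else 'ok'}")
```

Run periodically, such a check catches drift in the data feeding a deployed system, not just in the initial training set.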

Galdón advocates algorithmic audits, essential to guarantee equal treatment for all groups. “An audit makes it possible to see, in a specific context, what impact an algorithmic system is having, and to ensure that especially vulnerable or discriminated-against groups are protected, by systematically and constantly measuring those impacts and making sure they are fair”, she concludes.
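One concrete metric such an audit might track is the disparate-impact ratio (the “four-fifths rule” used in US employment law): the favorable-outcome rate of a protected group divided by that of the most favored group. The decision log below is invented for illustration.

```python
# Sketch of one audit metric: the disparate-impact ratio, comparing
# favorable-outcome rates between groups. Data are illustrative.
def disparate_impact(decisions, protected, reference):
    """Ratio of favorable-outcome rates: protected vs. reference group."""
    def rate(group):
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# (group, favorable_outcome) pairs from a deployed system's log:
# 45/100 women approved vs. 70/100 men approved.
decisions = [("F", True)] * 45 + [("F", False)] * 55 + \
            [("M", True)] * 70 + [("M", False)] * 30

ratio = disparate_impact(decisions, protected="F", reference="M")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.45 / 0.70 ≈ 0.64
if ratio < 0.8:
    print("Below the four-fifths threshold: potential discrimination.")
```

Measuring this systematically and constantly, as Galdón describes, turns a one-off fairness claim into an ongoing guarantee.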
