The algorithm that evaluates the dangerousness of Catalan prisoners works in a “random” way

For 15 years, the Catalan penitentiary system has been using RisCanvi, an algorithm that supposedly helps judges make sensitive decisions, such as granting third-degree parole or conditional release. A reverse audit (conducted without access to official data) concludes that this program’s contribution to the process is questionable. “The system we have audited seems to behave in a random manner, in the sense that similar combinations of factors and behaviors (of prisoners) can lead to the assignment of very different risk levels,” reads the report prepared by the Eticas Foundation, to which EL PAÍS has had access.

When an inmate in Spain applies for certain prison permits, the judge who must authorise or deny the request receives a report on the inmate. This document, prepared by professionals at the centre where the inmate is held, provides elements on which to base the decision (history, behaviour, etc.). In Catalonia, part of this report is prepared by RisCanvi, which estimates each subject’s risk of reoffending from a series of variables, each of which is assigned a different weight. This key information, the structure of the algorithm, is not public.

Despite dealing with an issue as sensitive as prison permits, RisCanvi has not been subject to an impact assessment process, as required by the European Regulation on Artificial Intelligence. “Although RisCanvi’s algorithm does not use artificial intelligence (AI) but is actuarial, it is a technical process that makes decisions related to high-risk issues, such as criminal justice and penitentiary matters, so it falls within the broad definition of AI made by the regulation and is therefore subject to it,” stresses Gemma Galdon, founder and CEO of Eticas.ai, a firm specializing in algorithmic audits, which includes the Eticas Foundation.

After the Catalan government rejected the offer of a free conventional audit, Galdon’s team decided to carry out a reverse audit. This type of analysis takes its name from reverse engineering, the process of working out a product’s components and manufacturing process from the finished item. In the case of RisCanvi, the Eticas researchers began by interviewing six former prisoners to whom the algorithm had been applied, as well as two social educators familiar with it, three specialised psychologists, four lawyers and an activist. They also analysed public data on 3,600 individuals who were released from Catalan prisons in 2015. With this information, they evaluated the effects of the algorithm in an attempt to understand how it works.

“RisCanvi is a system that is unknown to those it impacts, the inmates; that is not trusted by many of those who work with it, who are also not trained on its operation and weights; that is opaque and has not adhered to the current regulations on the use of automated decision-making systems in Spain, where AI audits have been required since 2016,” the audit states. “But, above all, our data show that RisCanvi may not be fair or reliable, and that it has failed to (…) standardize results and limit discretion. In line with previous studies, we do not consider RisCanvi to be reliable, as this would require a clear relationship between risk factors, risk behaviors, and risk scores.”

“These are the conclusions we have reached by analysing the data we have been able to collect. If the Department of Justice has other data, we will be happy to verify and compare them,” says Galdon, who complains about the lack of cooperation from the Generalitat. Her team has been working on this project for two years. “When the Department of Justice found out that we were doing this work, it prohibited workers linked to penitentiary institutions from responding to the first information request we made,” she explains. They stopped the investigation because the department promised to carry out an internal audit. “When six months passed without any news of any movement in that direction, we resumed the work,” says Galdon.

One of the psychologists interviewed for the report says that very few inmates know that they are being evaluated by RisCanvi. “They do not know that this algorithm decides the quality and circumstances of the prison benefits they will enjoy,” the audit states. “I do not believe that a tool can predict human behavior. Although it collects a lot of information and the result it produces is apparently objective, we are talking about people,” says a lawyer consulted by the Eticas team.

What is known about RisCanvi

RisCanvi is updated every six months with data that officers feed into the system when they enter their inmate reports into the computer. First, a short version of the system (RisCanvi Screening) is applied, consisting of ten items that range from the inmate’s history of violence (if any) and the age at which he or she first committed an offence, to whether he or she has had, or has, problems with drugs or alcohol and whether he or she has resources and family support.

The result of the algorithm can be low risk or high risk. If the risk is high, the extended version of RisCanvi is activated, which consists of 43 elements and requires an officer to conduct a scheduled interview with the inmate. The full version takes into account issues such as the distance between the inmate’s home and the prison, the criminal record of his or her close circle, educational level, socialization difficulties, IQ, personality disorders, whether the person concerned is the family’s main source of income, and his or her level of impulsiveness and emotional instability. Each of these elements is weighted automatically, and its weight varies depending on sex, age and nationality. The final verdict of the algorithm can be low, medium or high risk.
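Neither the exact weights nor the cut-off points of RisCanvi are public. Purely as an illustration of how a two-stage actuarial score of this general shape can be structured (a short screening followed by a longer weighted checklist), the sketch below uses invented item names, weights and thresholds; it does not reproduce the real formula.

```python
# Purely illustrative sketch of a two-stage actuarial risk score of the kind
# described for RisCanvi. The real items, weights and thresholds are not
# public; every name and number below is invented.

SCREENING_WEIGHTS = {
    "violent_history": 3.0,         # hypothetical screening items
    "age_at_first_offence": 2.0,
    "substance_abuse": 2.5,
    "family_support": -1.5,         # a protective factor could lower the score
    # ... the remaining screening items are omitted
}

def screening(answers: dict) -> str:
    """Weighted sum of the screening items -> 'low' or 'high'."""
    total = sum(SCREENING_WEIGHTS.get(item, 0.0) * value
                for item, value in answers.items())
    return "high" if total >= 5.0 else "low"        # invented cut-off

def full_assessment(answers: dict, weights: dict) -> str:
    """Hypothetical 43-item version, applied only after a 'high' screening.
    Per the description of RisCanvi, the weights also vary with sex, age and
    nationality, so a real system would select `weights` accordingly."""
    total = sum(weights.get(item, 0.0) * value for item, value in answers.items())
    if total >= 12.0:                               # invented thresholds
        return "high"
    if total >= 6.0:
        return "medium"
    return "low"

# Example: an inmate flagged 'high' at screening, then scored on the full version.
inmate = {"violent_history": 1, "age_at_first_offence": 1, "substance_abuse": 1}
if screening(inmate) == "high":
    level = full_assessment(inmate, weights={"violent_history": 4.0,
                                             "substance_abuse": 3.0})
    print(level)   # -> 'medium' with these invented numbers
```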

However, the weight assigned to each of these variables in the formula, which is key to understanding how the algorithm works, is not public. Some defenders of the tool argue that, if the weights were made public, prisoners would know how to manipulate the outcome of the RisCanvi evaluation with small actions so that it favoured their interests. Galdon does not understand this criticism. “Knowing what you are being judged on is a basic rule of the rule of law. To suggest that this is a problem is to not understand what very fundamental things in our legal system are based on,” she explains.

An insufficient evaluation

Some experts consider it pointless to carry out an algorithmic audit, or even believe that doing so legitimises a way of working that has been questioned for years. “Rivers of ink have been spilled about the actuarial method, explaining why it contradicts the concept of justice in our legal culture, which is methodologically individualistic, by using a generic (and therefore not individual) statistical method to evaluate a person,” summarises Lorena Jaume-Palasí, an expert in algorithmic ethics and advisor to the European Parliament, among other institutions.

“The algorithm tries to make everyone equal. It aims to help professionals make their work more efficient, but it is not binding,” says Antonio Andrés Pueyo, professor of Psychology of Violence at the University of Barcelona and director of the research group that developed RisCanvi, in an interview.

“It is not binding, but it is obligatory,” Jaume-Palasí replies. The expert believes that it is time to explain what the task of justice is “and why this type of system does not constitute a change in the administrative process, but rather is an old method that has simply been digitalized to continue doing what has always been done.”

You can follow EL PAÍS Technology on Facebook and X or sign up here to receive our weekly newsletter.
