Roxana Daneshjou: “A very clear bias in dermatology is that AI is not trained with images of dark or black skin” | Artificial Intelligence

Professor, doctor and researcher Roxana Daneshjou is a leading figure in her field with her sights set on the future. A physician at a clinic in Palo Alto (not far from Stanford University in California, where she trained), she is also an expert in the application of AI models in her specialty, dermatology, and in healthcare more broadly, with numerous scientific articles published on its promises and dangers.

Profiles like yours, which combine the practice of medicine with AI research, are not yet common…

I studied bioengineering in college and was already interested in using technology to improve healthcare. When I learned about deep learning and early forms of AI, I realized that because dermatology is very visual, computer vision could have a huge impact on our field. So I wanted to understand how models are built and tested, and understand the technical aspects while practicing medicine. It helps me understand real-world problems and think about how we can use technology to address them.

Are there excessive expectations in this area too?

Anything that claims we are going to replace doctors can be considered hype. But there are also plenty of opportunities, such as tools that help with administrative tasks or support decision-making. Many, if properly tested, will be useful.

Which ones have more potential?

For example, a model that can listen to the conversation between doctor and patient and then automatically generate the necessary medical documentation. This would allow healthcare professionals to spend more time with the patient instead of worrying about paperwork. Also interesting are tools that analyze images such as X-rays or MRIs to help identify diseases and assist the doctor reading those images. There are many more possibilities, but the most important thing is that everything is validated, and for that, prospective clinical trials are needed to follow up on patients and ensure there are no harmful effects and everything works well. Evaluations are also needed to see how well these tools perform in the real world.

Could some of these tools be dangerous?

I am concerned about applications that are used directly by patients. For example, in dermatology we have seen mobile apps that claim to diagnose skin cancers. For most of them we don't know whether they work or have been properly tested or validated, but they can be easily downloaded and have the potential to cause a lot of harm if they falsely reassure the patient or cause unnecessary worry. There are no dermatological models that can identify these things autonomously, without a doctor reviewing the images.

Should there always be supervision by a doctor?

Certain AI models in dermatology can help the primary care physician and increase their accuracy, but that doctor’s intuition and wisdom still make the final decision. For example, one study has shown how these tools can improve the ability of general practitioners to detect skin diseases. If these tools help the primary care physician do a better job and figure out who needs an appointment, that can be very useful.

Will AI destroy jobs?

I don’t think so, at least in this sector and at the moment. It will be integrated into the healthcare system to work in collaboration with healthcare professionals, but we are nowhere near the precision needed for it to displace doctors and healthcare professionals.

There are impressive examples of AI, but we are still in the early stages of figuring out how to apply it to patient care.

It may not replace them, but will it force them to update and learn how to use it?

Yes, they will have to train. It is often said that doctors will not be replaced by AI but by other doctors: those who understand the technology and know how to use it. It is exciting to see the new generations of doctors and healthcare professionals showing interest in these tools. More is needed, but there is already an awareness that these tools are going to be important, that they need to be understood, and that clinicians should help build them.

And what about engineers? Will they have to train in ethics and medicine?

As someone who works between the technological side and the practice of medicine, I find it extremely important when designing an algorithm to understand the problem you are trying to solve and how it manifests itself in the real world. Several studies have already shown that if you do not take into account biases and the social factor, you can build algorithms that cause harm to vulnerable populations.

Can you give an example?

A very clear bias in dermatology is that many models are not trained with images of diseases on dark and black skin, which is why they are very bad at identifying diseases and cancers in those patients. It is a bias in their design, because they can do it with white skin.

Are patients ready to understand that Dr. ChatGPT does not exist?

There was already misinformation on Google, obviously, but when you talk to ChatGPT it might give you wrong information or hallucinate by saying things that aren’t true. It’s a big risk with this type of language model.

Daneshjou, the daughter of Iranian immigrants in the United States, has been involved in campaigns against the expulsion of some professors and students from Iranian universities for belonging to the Bahá'í community.

Does the entry of companies like Google and Microsoft pose a privacy issue?

Patient privacy is hugely important and any company working in this field must be aware of and protect it. How patient information is going to be used, who is going to have access to it – everything must be treated with the utmost transparency.

Should they be informed if AI is used in diagnostics?

Transparency is also needed here: patients should know exactly when AI is used and have the right to ask for a second, human opinion, without AI.

And will they be able to have that second opinion?

That’s a good question. I think we don’t know yet. For example, some insurance companies are using AI to deny coverage and so far there are no mechanisms to override those AI-generated denials. In fact, there is already litigation on this in the US.

Is there a risk of two levels of healthcare coverage, one with access to human doctors and another with access only to AI models?

This can be a problem in countries without universal public healthcare, such as the US: vulnerable populations without access to care might get only some sort of AI assessment, while people with resources have access to the best coverage. Technology may be a luxury available only to those who can pay, but it could also be used to give people without that access care that is worse than the human interaction others receive.

Despite these cautions, are we at the beginning of a revolution?

I think AI will change healthcare, and I hope for the better. In the best-case scenario, we will reduce the burden on professionals, help them deliver more accurate and timely care that is fair and not harmful to patients, with a better outcome. That is the hope. The worst-case scenario is tools that don’t work or are biased and end up harming vulnerable populations with substandard care… As for the future, I am neither an optimist nor a pessimist, I am a realist. I see the opportunities and I am aware of the potential dangers.