
Is medical AI dangerous for patients?

Medical Pharmaceutical Translations • Aug 9, 2021

AI has infiltrated many aspects of our lives, including healthcare and wellness. While there are many benefits, AI can put patients in danger. If you’re thinking about some sort of I, Robot-like scenario, it’s not quite as overtly dramatic as that! Still, the consequences could lead to tragedy.

How is AI being used in healthcare?

Artificial intelligence is being used in a wide range of ways, from wearables like smartwatches to complex medical devices. Some of the latter are used to screen and diagnose patients.

Why is AI an advantage in healthcare?

Whatever form it takes, the main advantage of AI in healthcare is autonomy. For instance, a person can monitor things like their heart rate with a wearable device, or even just be reminded to get up and exercise. In hospitals, AI can be used to evaluate and diagnose patients with certain conditions, giving doctors and other healthcare providers more time to focus on discussions with them during appointments.

These are just two examples. As with technology in general, the ways AI is being used in the healthcare and wellness fields will continue to grow. But that growth isn’t a totally positive thing…

Why can medical AI be dangerous?

There are two main reasons AI can pose threats to patients:

1. Most devices aren’t regulated by the FDA (or any other organization). As this article explains, there are several reasons for the lack of regulation. These include the fact that many devices that use AI, like step counters, aren’t considered risky enough to require oversight.

Another issue, journalist Liz Szabo claims, is cost. Medical AI is a thriving billion-dollar industry. Slowing product release because of regulatory steps would mean less profit.

2. AI used in things like screening devices can misinterpret information, leading to dangerous assumptions and generalizations.

Szabo gives two compelling examples of this phenomenon. The first is a device meant to predict the likelihood of a patient developing Alzheimer’s disease based on their language use. As Szabo points out, the error in this case went beyond AI itself; after all, not all patients have the same vocabulary or the same level of English fluency.

The second example is more complex. Doctors discovered that an AI tool being used to predict pneumonia risk from chest x-rays wasn’t actually analyzing the x-rays themselves, but where they had been taken.
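For readers curious about how a model ends up "cheating" like this, here is a tiny, made-up sketch in Python. It is not the system Szabo describes; the numbers and feature names are invented purely to show how a classifier can latch onto where data came from instead of what the data shows, when the two happen to be correlated in the training set.

```python
# Toy illustration of "shortcut learning" (all data here is simulated):
# the model learns the site a scan came from, not the scan itself,
# because site and diagnosis are correlated in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical setup: site 1 is an ICU (many pneumonia cases),
# site 0 is an outpatient clinic (few). "image_signal" stands in for
# the real radiological evidence, but it's very noisy.
site = rng.integers(0, 2, n)
pneumonia = (rng.random(n) < np.where(site == 1, 0.8, 0.1)).astype(int)
image_signal = pneumonia + rng.normal(0, 2.0, n)

X = np.column_stack([image_signal, site])
model = LogisticRegression().fit(X, pneumonia)

# The weight on "site" dwarfs the weight on the image evidence, so the
# model would fail badly at a new hospital where that shortcut breaks.
print("weight on image evidence:", round(model.coef_[0][0], 2))
print("weight on site of scan:  ", round(model.coef_[0][1], 2))
```

In other words, the algorithm can look accurate on its own training data while relying on something that has nothing to do with the patient's lungs.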

Health law and bioethics professor Sharona Hoffman points out that algorithms can also lead to issues like discrimination against certain types of patients.

For instance, one algorithm interpreted patients who spent less money on healthcare as being healthier, and denied them priority status. In reality, many of those who were spending less were from low-income backgrounds and simply couldn’t afford more care.
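To make the mechanism concrete, here is a minimal, hypothetical sketch (the patients and figures are invented, and this is not the actual algorithm Hoffman refers to). It shows how ranking patients by past spending, as a proxy for how sick they are, pushes a sicker but lower-income patient down the priority list.

```python
# Invented example of the cost-as-proxy problem: spending is used as a
# stand-in for health need, which penalizes patients who couldn't afford care.
patients = [
    # (name, number of chronic conditions, past year's health spending in $)
    ("Patient A", 4, 1_200),   # very sick, but couldn't afford much care
    ("Patient B", 1, 9_500),   # mildly ill, well insured, spends freely
]

# Proxy-based priority: "whoever spent the most must need the most care"
by_spending = sorted(patients, key=lambda p: p[2], reverse=True)

# Need-based priority: rank by actual burden of illness instead
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("Priority by spending:", [p[0] for p in by_spending])  # Patient B first
print("Priority by need:    ", [p[0] for p in by_need])      # Patient A first
```

The algorithm isn't "wrong" about the numbers it sees; the problem is that the proxy it was given encodes an existing inequality.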

Discrimination can also occur between the sexes. Hoffman cites the example of heart attack symptoms. Nowadays we know that men and women have different heart attack symptoms, but many older medical texts include only the symptoms suffered by men, which means AI may misdiagnose a woman who is actually experiencing a deadly event.

Additionally, older medical sources often included statistics about the health issues of different races that have since been proven incorrect. If this inaccurate information was used to train AI algorithms, it could lead to misdiagnosis and other problems.

How can we make medical AI safer?

Fortunately, there are ways to change this troubling trend. For instance, Hoffman suggests focusing on four areas to change things for the better: litigation, regulation, legislation and best practices.

Each area calls for its own particular measures, but the unifying principle is that AI needs to be regulated by law and monitored by organizations like the FDA.

Hopefully, as time goes on and we better understand the risks that lurk behind medical AI errors, this kind of regulation will be put into place. For now, healthcare providers may have to do the fact-checking themselves, considering AI tools as potentially helpful, but not always accurate.


Contact Our Writer – Alysa Salzberg

