
Does the healthcare industry rely too much on AI?

Medical Pharmaceutical Translations • Feb 6, 2023 12:00:00 AM

A few weeks ago, many users of the online mental health provider Koko were outraged when co-founder Rob Morris tweeted that the platform had used the trendy AI program GPT-3 instead of real humans for 4,000 mental health consultations.

To be more precise, the AI was used alongside human beings who verified its responses. But users still felt betrayed. Some mental health apps and sites are open about using chatbots, but Koko never disclosed to users that their conversations weren’t exclusively with a real person.

Mental health chatbots can be useful for some things, like guided meditation or helping to diagnose certain mental health issues and then directing users to resources for help.

But nothing can replace a real mental health professional. As advanced and impressive as it can sometimes be, AI is far from perfect. Misdiagnosis, irrelevant advice, and failure to pick up on signs that a user is in distress are among the very real risks.

So, why did Morris choose to substitute AI for real people? According to him, it was simply an experiment. But the truth may be that, from a financial perspective, replacing real people with bots is an appealing prospect. AI doesn’t need a salary, benefits, or sick leave, and it’s available 24/7, a feature that consumers would find very attractive.

Morris isn’t the only person in healthcare who’s tried to incorporate AI into his work. In fact, AI is far more present in healthcare than you might think. You’ll find it being used in diagnostic tools and chatbots, but also to fill out common medical and insurance forms, search research results and data for patterns, and even analyze medical imagery for risk factors for specific health issues.

In many ways, AI has improved the quality of healthcare. For instance, by lightening the load when it comes to standard paperwork, it’s allowed some clinicians to spend more one-on-one time with patients.

Benefits like these make many healthcare professionals eager to use AI. But there’s a dark side, too.

The biggest issue with artificial intelligence is that its “intelligence” is limited by what it can “learn”. Machine learning algorithms are trained on sources like online documents and communications. But, as many people have pointed out, even when training draws on millions of resources, a lot of areas are underrepresented.

An easy example is language. Some languages, like English, have a very strong presence online. But others are more limited, and many of the world’s 7,000 spoken languages may barely have an online presence at all. This is one of the reasons AI can have trouble with general translations, let alone specialized ones involving medical terminology.

Another issue is bias. For instance, more healthcare documents and research have focused on white adult males than on any other group, so AI has less to learn from and may even have trouble making diagnoses for other groups. One way this could turn deadly: AI might recognize the symptoms of a heart attack based on criteria for male patients, when we now know that heart attack symptoms present very differently in female patients.

There’s also the fact that, as Dr. Victor Volovici points out, AI is programmed by humans who assume it learns the way we do, including understanding when to disregard certain details. But spotting errors and making exceptions are things AI also needs to be taught. Otherwise, the results can range from inaccurate to life-threatening.

A report in Science Daily cites the example of a program used at the start of the pandemic to analyze images of patients’ lungs in order to predict COVID infection. When humans checked the results, they realized that the AI was picking up on the letter “R” (for “right lung”) printed somewhere on each image and counting it as a spot rather than dismissing it.

It’s clear that AI needs to be improved, but recently, some have said that there’s at least one bot that comes close to perfect. Released to the public in late November 2022, ChatGPT has made headlines with its ability to generate written documents that seem to have been composed by a real human being. Tests have shown that these documents are hard for other AI, or even humans, to recognize as machine-generated.

Rheumatologist Dr. Clifford Stermer was thrilled by this new technology for a specific reason: it could write letters on his behalf to insurance companies explaining why a patient required a specific treatment, saving him valuable time. He excitedly shared his findings in a TikTok that soon went viral.

So is ChatGPT AI that we can finally rely on?

…Well, not really, as it turns out.

A few days later, Dr. Stermer discovered that while the bot’s prose was flawless, the sources it cited were incorrect or attributed to the wrong authors.

The article that covered this news cites Dr. David Canes, who’s also tried to use ChatGPT in his practice. He’s found that it can be helpful in small ways but that its intelligence has been overestimated:

Just like when our pets do anything remotely human-like, we … ascribe emotion and knowledge to the pet. … Similarly here, ChatGPT knows statistically what the next word should be. It can come up with wildly impressive-looking results, but it is prone to error. If relied upon for medical research or asking medical questions, it might get the question right, but it will also confidently put forth completely fantastic-sounding garbage.

AI’s false attributions and inaccurate information have become a major concern for some medical researchers. Among them, a group that published an article on the subject in Nature Medicine is encouraging medical journals to ask why AI is being used in each research project. The group is creating guidelines for AI use that it hopes will become standard practice in medical research around the world.

AI has changed medicine, often in positive ways. But its potential can blind us to the reality: we still can’t totally rely on it, especially in fields where human lives are at risk.




Alysa Salzberg