AI Has Long Way to Go to Be Helpful in Patient Care

Despite skepticism, artificial intelligence (AI) can be a boon for physicians, but some improvements are needed first.

Notably, AI algorithms need to be validated, and bias against particular groups of patients must be removed from them if they are to achieve their full potential, John Halamka, MD, president of the Mayo Clinic Platform, said last week at the annual convention of the Healthcare Information and Management Systems Society (HIMSS) in Las Vegas, Nevada.

In an interview streamed to reporters, Halamka noted that in the future, AI algorithms applied to very large datasets might augment the knowledge and experience of physicians by providing insights based on the outcomes of millions of patients. But for that to occur, doctors must be confident that the algorithms are reliable and free of bias.

We’re a long way from that ideal state, he pointed out. Many observers have expressed concern about the potential racial bias of algorithms that are based on data that are not representative of the patients whom doctors see in their daily practice. Currently, Halamka said, “there’s no way to describe the bias of an algorithm or the heterogeneity of data in an algorithm.”

Four “Grand Challenges” for AI to Help Patients

AI is fundamentally about probability, and the validity of an algorithm in patient care is based on the data used to “train” it, he pointed out. But in most cases, users of the algorithm don’t know anything about how it was created.

“This is our issue: the algorithms are only as good as the underlying data. And yet, we don’t publish statistics with each algorithm describing how it was developed,” Halamka said.
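To make his point concrete, here is a minimal sketch of the kind of per-subgroup statistics that could be published alongside an algorithm. It is an illustration only, not Mayo's methodology: the data are synthetic, the column names are hypothetical, and the choice of AUC as the discrimination metric is an assumption.

```python
# Illustrative sketch only: the per-subgroup statistics that could ship
# with an algorithm. Synthetic data; hypothetical column names.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic validation set: a demographic group and a true outcome label.
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C"], size=n, p=[0.6, 0.3, 0.1]),
    "label": rng.integers(0, 2, size=n),
})

# Make the model's score noisier for group C, mimicking an algorithm
# trained on data in which that group was underrepresented.
noise = np.where(df["group"] == "C", 1.0, 0.4)
df["score"] = df["label"] + rng.normal(0.0, noise)

# The kind of table a developer could publish with the algorithm:
# sample size, outcome prevalence, and discrimination (AUC) per subgroup.
rows = []
for name, g in df.groupby("group"):
    rows.append({
        "group": name,
        "n": len(g),
        "prevalence": round(g["label"].mean(), 3),
        "auc": round(roc_auc_score(g["label"], g["score"]), 3),
    })
print(pd.DataFrame(rows).to_string(index=False))
```

A subgroup with a markedly lower AUC, like group C in this synthetic example, is exactly the kind of heterogeneity that Halamka says current algorithm releases fail to disclose.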

Data transparency is just one of the four “grand challenges” that must be met to make AI useful in diagnosing and treating patients, he said. Another is “gathering novel data,” including smartphone data on a person’s geolocation, sleep patterns, and so forth. “There aren’t a lot of standards for this nonstandard, high-velocity data,” he noted.
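As a small illustration of what wrangling such data might involve, the sketch below normalizes two very different smartphone payloads, a geolocation ping and a sleep interval, into one timestamped observation record. The record shape and field names are hypothetical and not drawn from any published standard, which is precisely the gap Halamka describes.

```python
# Hypothetical sketch: normalizing heterogeneous, high-velocity smartphone
# data into a single timestamped observation record. Field names are
# illustrative, not taken from any published standard.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Optional


@dataclass
class Observation:
    patient_id: str
    kind: str                    # e.g. "geolocation", "sleep"
    start: datetime
    end: Optional[datetime]      # point readings have no end
    value: dict                  # sensor-specific payload


def from_geolocation(patient_id: str, raw: dict) -> Observation:
    """One GPS ping from a phone SDK (raw format is hypothetical)."""
    return Observation(
        patient_id=patient_id,
        kind="geolocation",
        start=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        end=None,
        value={"lat": raw["lat"], "lon": raw["lon"]},
    )


def from_sleep(patient_id: str, raw: dict) -> Observation:
    """One sleep interval from a wearable export (format hypothetical)."""
    return Observation(
        patient_id=patient_id,
        kind="sleep",
        start=datetime.fromtimestamp(raw["sleep_start"], tz=timezone.utc),
        end=datetime.fromtimestamp(raw["sleep_end"], tz=timezone.utc),
        value={"stage": raw.get("stage", "unknown")},
    )


# Two very different raw payloads end up in one queryable shape.
obs = [
    from_geolocation("p1", {"ts": 1700000000, "lat": 36.1, "lon": -115.2}),
    from_sleep("p1", {"sleep_start": 1700000000, "sleep_end": 1700028800}),
]
for o in obs:
    print(o.kind, o.start.isoformat(), o.value)
```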

The “discovery” of algorithms must also be democratized so that more clinicians can be involved in creating them, Halamka said. Very large datasets must be assembled and curated, and the proper tools must be provided to “investigators of all kinds, so they can look at the deidentified data and create new algorithms.

“Mayo put 60 petabytes of deidentified data into a secure container on the Google cloud and made an AI factory available to all Mayo Clinic faculty, and they developed 60 new algorithms. How do you empower those without AI experience to be engaged in algorithm development?” he asked.

He said experts have to figure out how to deliver the results of an algorithm into the clinical workflow. “Nobody is going to read a PDF about the AI results the next day,” he noted. “This needs to be, ‘The patient is right in front of me now. I push a button, and 12 milliseconds later, the advice I need is in my workflow as I’m putting in an order to care for that patient.’ That’s our fourth grand challenge.”
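One existing mechanism for this kind of in-workflow delivery is HL7’s CDS Hooks standard, in which the EHR calls a decision-support service at the moment of order entry and renders the returned “cards” inline. The sketch below is a minimal, hypothetical service: the service id, the risk threshold, and the placeholder scoring function are all assumptions, and CDS Hooks is offered here as one possible integration style, not the specific approach Halamka described.

```python
# Minimal sketch of a CDS Hooks-style service: the EHR POSTs context at
# order entry and renders the returned "cards" in the clinician's
# workflow. The service id and the risk model here are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)


def risk_score(patient_id: str) -> float:
    """Placeholder for a real model call; returns a fake probability."""
    return 0.72


@app.post("/cds-services/sepsis-risk")
def sepsis_risk():
    ctx = request.get_json(force=True).get("context", {})
    score = risk_score(ctx.get("patientId", ""))
    cards = []
    if score > 0.5:  # hypothetical alerting threshold
        cards.append({
            # "summary", "indicator", and "source" are required card
            # fields in the CDS Hooks specification.
            "summary": f"Predicted sepsis risk {score:.0%}, consider early workup",
            "indicator": "warning",
            "source": {"label": "Example risk model (hypothetical)"},
        })
    return jsonify({"cards": cards})


if __name__ == "__main__":
    app.run(port=8080)
```

Because the EHR waits on this call during order entry, a service like this would need to sit behind a low-latency serving layer so the round trip stays within an interactive budget of the kind Halamka describes.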

Although the US Food and Drug Administration might address the safety of AI algorithms, Halamka said, the best way to approach the efficacy and potential biases of an algorithm is for a public-private collaboration to work out industry standards that can help overcome the challenges.

A consortium involving the industry and academic medical centers is already in the works and will be announced soon, he said. Although he didn’t name any participants, he implied that the Mayo Clinic might be one of them.

——————————————————

Originally Published On: Medscape
