This article originally appeared in Healthcare Musings.
Recently I was chatting with an interesting person who works closely with young companies and who is himself considering whether to further his education in health tech. He asked me this question: what are the biggest challenges in the adoption of AI in healthcare, and what would have to happen to overcome those barriers to adoption?
I have to say that I have an immediate knee-jerk reaction when anyone says “AI” to me these days. A thousand fleece vests flash before my eyes and my ears snap shut. I have heard that acronym so many times over the last 15-20 years that it now sounds like what Ginger the dog hears in that famous Far Side cartoon: “Blah blah blah…” And the topic has become far more complex over time.
It used to be that you just needed to say “AI” to get a venture capitalist to write you an absurdly large check. Now you are required to add a fancy-sounding modifier to the term to prove you’re totally hip and fresh: Generative AI, Explainable AI, Responsible AI, Theory of Mind AI, etc. As I have said and written elsewhere, we are soon going to need an AI system to tell us which AI product to use to best effect.
But back to the question at hand: what are the biggest challenges and, more pointedly, what must happen for adoption to reach the so-called hockey-stick inflection point that so many entrepreneurs show their investors? It is a question worth considering given that we are clearly not going to escape this jargon bingo word as quickly as we did “bitcoin.”
Let’s start with the fact that AI, or at least a variety of forms of it, is already being used in the healthcare world. The two most prevalent examples are in drug discovery and image analytics. Numerous companies are applying forms or subsets of AI to medical imaging to find abnormalities that may have been missed by the human eye. This is a great application, as it allows human doctors to perform better by giving them more actionable knowledge. Granted, this can also lead to overuse of diagnostic interventions, but fundamentally, finding a cancer that may otherwise have been missed is a worthwhile endeavor.
On the drug discovery front, the jury is still out. Lots of amazing effort is happening here to find molecules that hit targets people have yet to see on their own. The idea is better, faster, cheaper drug discovery. We may get to the holy land–there are several AI-generated new molecules now in Phase I clinical trials–but it will be a while before we know whether they are safe and effective, and some recent attempts have not borne fruit. We all know that the devil is in the FDA’s details and, usually, when deciding among better, faster, and cheaper, you only get to pick two. It turns out that AI isn’t the answer to every pharma executive’s prayers.
There are many other applications of AI in healthcare now, but mainly they are machine learning-based chatbots and the like – not the fancy stuff of full-on, adjective-modified AI.
But focusing on the question about what must happen to get full-blown AI adoption in the sticky wicket of healthcare, here is my list:
- Proof: If healthcare has taught us anything, it is that introducing technology into its midst is more difficult than explaining the meaning of life. Vendors of AI products need to demonstrate that the application of this stuff delivers what was promised in the marketing materials and actually makes life better for the buyer. As to that last one, “better” is defined by the buyer: for health systems it’s more revenue; for payers it’s less cost; for pharma/medtech manufacturers it’s better products used by more people and greater speed to market. Everyone has a different definition of “proof.” Delivering that proof is no small task given the long cycle required to demonstrate it.
- Understandability: A corollary issue to proof is the ability to understand how the AI product is getting to its answer. Many of the people who would use AI in healthcare are scientists in reality or at heart. These people want to know not only that the experiment worked, but that it is reliable and repeatable. Given that AI is a decidedly black box, the concept of Explainable AI has been born. What we don’t yet know is whether opening the AI black box will put light where there has been darkness or release what Pandora was worried about all along. And I must know, will there also be a category called Inexplicable AI? Or maybe that’s what it should be called now.
- Data without Bias: Many, including me, have written about the fact that the data that feeds our AI systems is basically a mash-up of what humans have written down before. And we all know what that means: garbage in, garbage out. If the majority of the knowledge base of ChatGPT was defined by young white men looking through their own Casper-colored lenses, then we should just start calling it ChadGPT and get it over with. We need a much wider aperture of perspectives feeding the AI world to be sure the results it brings to healthcare are not biased against large segments of our rainbow-colored world and do not simply fail to apply to women, etc. This is a huge problem right now, and one that stands in the way of responsible adoption for clinical purposes, even knowing that AI can already outperform doctors in certain instances. Doctors make mistakes, but AI informed by the medical systems that trained those doctors can also make mistakes, particularly when the data set came from a population that looks like the spectrum of people at the average Rush concert.
- Ethics: Data bias is a problem to be addressed, but ethical issues in AI are even thornier. Thankfully, enough people care about the ethical use of AI that it has become a serious conversation and not entirely an afterthought that ends in saying, “Oops, we accidentally eliminated the human race.” We must figure out how to guard against the misuse of AI to intentionally drive false conclusions that benefit a particular group, party, evil dictator, or greedy technology billionaire who may or may not be setting up a cage fight with his counterpart.
- Self-Interest: As the famous Upton Sinclair quote goes, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” To be very direct, what rational physician, scientist, or insert-healthcare-job-title-here will elect to advance AI initiatives if doing so ultimately serves to disintermediate them? AI solutions may get imposed on organizations top-down, but that is a pretty rough row to hoe in healthcare. While it does happen (see: Epic EHR), the process is loud, painful, and lengthy, and many workarounds ensue. There is going to have to be a much clearer alignment of incentives between wo/man and machine for us to achieve broad-based adoption.
- Simplicity: Systems must be easy enough for regular humans to use to meaningful effect if they are to be fully adopted, even if said humans are soon-to-be-replaced-by-AI. Many AI products take hard-core data scientists and engineers to put into practice, and not every healthcare enterprise has loads of those people hanging around. Yes, it is true that ChatGPT (or ChadGPT) has made it sort of easy for regular people to use Generative AI, but consider this: a whole new job category has now been created for people who are exceptionally good at writing Generative AI prompts in order to ensure that the right answers come out. That’s yet another specialty expertise, and we are in the first inning of this game. I think a good analogy is WordPress, the internet-based software originally created so any idiot (aka, me) can make their own website. Except it’s not really that simple to use, and eventually, when the website gets mildly complex, we idiots must hire WordPress experts to help us get to the next level. Then, when the next level needs to rise to full-blown enterprise grade, WordPress just doesn’t cut it anymore and we must hire experts who build and maintain complex websites. I suspect that AI will prove far more complex and expensive than this for businesses to utilize effectively, thus creating a weird dichotomy between businesses that can afford to apply AI and those that can’t. And that will drive even more industry consolidation, which drives up cost, which drives me crazy.
- Workforce Issues: It’s interesting, but with some exceptions (and you will, I’m sure, tell me who you are), deep tech people are not healthcare people and healthcare people are not deep tech people. The healthcare industry is, as you may have surmised, filled with healthcare people and a bit light on deep tech people. And the deep tech people are usually reluctant to flood into the healthcare world because…logic doesn’t enter into healthcare. Without a healthy, integrated workforce of people who get both sides of this nerdy pair of worlds, we cannot achieve AI’s full positive potential. Right now, the playing field tends to look like that time when Michael Jordan did a stint as a minor league baseball player: nice idea, and he might have achieved greatness outside his typical environment, but the rest of the sport was confused. It’s going to take a while to bring the deep tech and healthcare cultures and expertise together to achieve AI nirvana. Note to self: ask ChatGPT how to achieve AI nirvana.
I’m sure there are other issues to consider, but this list will keep the healthcare world busy for the next several dozen years while a whole generation or two exits the workforce, thus opening the doors much wider for our robot overlords to take over. Or for AI to deliver great improvements. Or for there to be 52 new adjectives to put in front of AI to specify its particular use. Or all of the above.
Bruce Fryer says
Lisa,
I did see a Microsoft demo this year for patient scheduling that attempted to interpret a patient’s portal request and decipher what was actually being requested, along with the patient’s actual medical complaint. It was a step up from NLP. I suspect it will gain acceptance if it can assign the CPT code automatically and bill the insurance company before the patient shows up.
Lisa Suennen says
Bruce- ouch! But no doubt you are 100% correct. Extra points if they can bill pre-visit and the visit is cancelled! L
Helen Burstin says
Thank you Lisa! Such an excellent explainer.
Lisa Suennen says
Thanks Helen! L
Patrice Wolfe says
ChadGPT. Brilliant and, sadly, so accurate.
Lisa Suennen says
Patrice, thanks for enjoying my sick sense of humor. L
Sumita Jonak says
I’d be happy to see something as simple as scheduling become easier. Why should we have to call a bunch of doctors’ offices to see if an open slot is available next week? People cancel appointments all the time.
(It’s not a tech problem, more so a systemic problem of disjointed scheduling software…much as EHRs are not longitudinal across health systems.)
If someone is working on this, I want to learn more and help!
Lisa Suennen says
Sumita, thanks for the note. No kidding! For me it’s filling out the damn forms. Kill me now! Lisa