Don’t worry. It won’t.
by Steve Lefar, CEO of Applied Pathways
Artificial intelligence (AI), or machine learning, is the latest buzzword and investment theme in healthcare. Because the term AI is used to encompass everything from expert systems to recommendation engines to machine or deep learning to truly sentient AI, most of us interact with AI systems on a daily basis. “AI is transforming Google search. The rest of the Web is next,” says Cade Metz of Wired. “This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype.”
The places where AI is having the most day-to-day impact – search, online shopping, connecting with friends, and certain machine learning uses in industrial settings – are nothing like healthcare, where true AI would have to deliver a singular, definitive answer to an often only moderately specific question from limited data. AI in healthcare is still in the early days of the hype cycle, with vendors making all sorts of claims that are not yet supported by peer-reviewed, evidence-based research.
Many organizations are working hard to bring forms of AI to healthcare. IBM’s Watson and numerous startups like Enlitic* (full disclosure: I’m an investor), Arterys, Zebra, and iRhythm are focused on supervised (learning to recognize patterns in multi-week cardiac monitoring) and unsupervised (finding lung nodules on images) AI. The real focus is shifting to augmenting physicians via a recommendation engine approach, versus the initial approach many took of being the soothsaying Magic 8 Ball answer machine. Computers will dramatically enhance disease diagnosis, pattern recognition, recommendation of treatment options, and identification of gaps in care, outperforming clinicians flying solo with only the knowledge base in their brains.
The problem is that we have a new batch of startups trying to raise the next billion-dollar valuation by making wild claims of instant diagnoses via chatbots and online interviews. We must temper the hype or we will suffer yet another “AI winter” like the one foreshadowed all the way back in 1984 at the meeting of the American Association for Artificial Intelligence.
The Wizard of Oz in Healthcare?
Read the fine print right now. Every one of these new startups disclaims its results as being for informational purposes only, not diagnostic. They want investors to buy into HAL from 2001: A Space Odyssey or Star Trek’s M-5 (1968!). What they are really signing up for is the Wizard of Oz model – “pay no attention to that man behind the curtain.”
The real challenges are deep, especially in unsupervised machine learning, where we don’t even know why the computer gives us the answer it does. Are we to presume it is accurate? That’s fine for a recommendation to buy a coffee mug, but not for determining a singular choice of treatment without human concurrence. Nowhere is this more real than in autonomous driving, where the behavioral, legal, social, and technical challenges of the technology are becoming increasingly public.
Recently, people have started to express concern that, since computers are taught by humans, they absorb all of our implicit and explicit biases. According to a recent article by Brian Resnick, computers are just as likely to be racist as we are. With AI’s entrance into healthcare, the possibilities become potentially alarming. Brian notes, “It’s long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.) Might AI then recommend surgery at a lower rate for women?”
Others point out that the vast majority of peer-reviewed journal articles consumed by AI engines to build these models retain the implicit biases of their researchers, which skews what we believe to be the latest and greatest research. This opens the door to abuse by corporations around the globe that want their drugs, devices, or procedures to be the next great thing, regardless of their efficacy.
Are we ready for the Star Trek M-5 (Season 2, Episode 24, 1968)?
In 1968, when the Starship Enterprise was turned over to a black box, Gene Roddenberry envisioned a computer declaring Kirk and Bones non-essential personnel. It went downhill from there, as the AI engine misunderstood what humans would do. This black-box approach – where we see the end result, potentially after catastrophic damage, without seeing or understanding the path taken to get there – simply doesn’t work in healthcare. Doctors and patients alike need to understand WHY an illness was diagnosed or a treatment path was recommended. And we need to be assured that the implicit and explicit biases that plague us as humans aren’t determining our healthcare fate as decided by a computer.
Until AI can do that, and stop learning our own human frailties, it will likely remain a tool to augment us, not replace us. In healthcare, augmentation of the experts will be the answer for the foreseeable future. Investors, consumers, and providers should take heed and not believe the great and mighty Oz just yet!