Despite all of the talk about whether artificial intelligence algorithms will replace doctors, Eric Topol isn’t worried. Topol is a cardiologist at the Scripps Research Institute, a geneticist, and the author of several books about the future of health care.
His newest book is called Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (out now from Basic Books). Topol argues that humans will always crave the bond of being cared for by other humans and that AI can help enhance that bond and bring it back — if doctors are willing to stand up to business interests.
The Verge spoke to Topol about how health care works today, health privacy concerns in the age of AI, and the importance of physician activism.
This interview has been lightly edited for clarity.
Before we talk about how AI will affect health care and the patient-doctor relationship, can you tell me what that relationship looks like right now?
The relationship has deteriorated. It’s actually dreadful. The patient is short-changed because you get so little time with your doctor, and you don’t even get eye contact during that little bit of time. And it’s not just time. It’s the distraction of doctors serving as data clerks. You’re not going to be a good listener in that case. Doctors are terribly disenchanted and disillusioned and burned out and depressed.
Plus, we have all of this data — collected from genomes and sensors — and it’s all dressed up with nowhere to go. I open the book with a story that reinforced that for me: this information was not being processed by the doctors and the people who looked after me, and that hurt me and my recovery. It reinforced that if all of this data were teed up for people, we could make things safer, have better outcomes, and be more efficient, but clinicians are too busy. They can’t possibly get their arms around each person’s data. I think the transformative potential of AI is in its power to enhance the human aspect of medicine, which is something we’ve lost.
The human aspect of medicine is also why you think AI will never replace doctors, right?
Not only do we need human oversight because you can’t always trust an algorithm even when it’s validated — it could be hacked or have glitches — but I think we will always have a quest for that bond, that intimacy of communication. We used to have it. It was precious, and I can remember that. What’s happened over time is that the business of medicine took over, and all of these forces eroded that relationship. We can get it back.
Let’s talk about AI diagnosis first. Seemingly every week, there’s a study about how AI can diagnose some condition better than doctors can. How will this play out?
AI can see things that humans can’t. Deep learning trains machines to see things far better than a human ever will, and we’re starting to realize all of these things that we never would have guessed before. There are so many examples now. You can determine the potassium level in your blood from your watch, without drawing any blood. You can analyze the retina and determine with high accuracy whether it belongs to a man or a woman. You can analyze a colonoscopy, and machine vision will pick up polyps that are missed by GI doctors. The list goes on and on.
The missing piece, of course, is the careful, rigorous, prospective studies with validation and replication. We have the promise now. We’ve seen enough data, and it’s as exciting as anything I’ve seen in my four decades in medicine, but we also need to take it from excitement and hyperbole to the level of reality and unequivocal proof.
You envision a world where we have AI help with diagnoses and also algorithms integrating all of these sources of data together. What would that look like in the clinic?
Then we have a different world. When you see patients, you’re not trying to work your way through all of these different pages and sources of data. You’re at, “Okay, I’m going to contextualize this for my patient. I’m going to have a meaningful relationship and understand the presence of the person so that I can give my human wisdom and empathy.” It’s a whole different look than what we have now.
Everyone benefits if you get more efficient and doctors stand up and say, “We’re going to give this efficiency back to our patients.” People get more control and power over their data, with support from algorithms, while at the same time the burden on clinicians is decompressed. And then clinicians are getting their performance and efficiency up, and they remember why they went into medicine in the first place. It creates a flywheel effect. You get both sides getting this performance enhancement, and it basically changes the whole outlook for clinicians.
What about privacy?
That’s really important. It’s vital that each person owns their data and pulls it all together, because it’s not just what’s in your medical records — it’s dispersed across many places, hospitals, and sensors. Right now, nobody has all of their data, though you want it from the moment you’re in the womb to when you’re getting an assessment.
The biggest thing we can do is give data ownership to people. We have to assess the security of data and privacy, but it also involves a different ownership model.
How do you know that the efficiency will be given back to patients, instead of forcing doctors to just see more patients in a shorter period of time?
I spent a couple of years commissioned by the UK government to help review and evaluate the National Health Service. We had economists working on it, and it was striking that every minute saved in these voice recognition environments [where doctors aren’t sitting at a keyboard inputting information] translates to an enormous amount of freed-up time for physicians. The exponential impact is quite amazing.
AI can rev up efficiency and productivity and workflow, but if we go that route, we have to have the will to stand up for our patients. That hasn’t happened in the past. If we continue living just as we have, the medical community will get squeezed further, and there will be more burnout and more depression and suicide. The real test is whether the medical community can stand up to the financial business interests. We’ve sometimes been passive, and we can’t afford that again. It’s going to take activism to make medicine more humane while machines get better and enhance the human side.
What types of activism can doctors do?
You didn’t use to see doctors standing up. Only in recent years, when the National Rifle Association told doctors to “stay in your lane” on gun policy, did you see doctors stand up. These tend to be the younger folks, not the old dogs, of which there are too many in medicine. You’re starting to see physicians speaking out, and we can do that on a grand scale for the most important thing of all, which is the restoration of care in health care. I’m confident there’s a way to do that.
How far away is all of this?
We have evidence that people are interested. It’s moving so fast, but over the years, I’ve learned that whatever I estimate for how long it should take, I should probably multiply by four or five. I’m learning that even when you have something as exciting as this, it just takes much longer.
There’s a lack of investment in doing the high-quality research that’s needed. A lot of the best work in this space comes from startups that are developing a radiology algorithm or a dermatology algorithm or voice recognition for the clinic. They don’t necessarily have the resources, unless they’re acquired by a Google or an Amazon (which don’t have a track record of doing rigorous medical research either).
It’s hard to show proof. The medical community, and for that matter patients, are not likely to accept this reconfigured health care without proof. And it should be rigorous, because a faulty algorithm can hurt a lot of people very quickly, so there should be very stringent criteria and requirements: research in large numbers, on diverse people, in diverse venues. We need proof no one can argue with, and then it’ll move much faster, and we’ll start to get the momentum we need.