What Is the Role of AI in Medicine?

By John Palmer

Do humans make better caregivers than machines?

It depends whom you ask and what the machines are used for. With the rise of robotic technologies in surgical suites, and recent experiments with artificial intelligence (AI) at the bedside, healthcare is reaching a point where fewer human hands are needed to take care of patients. The safety and ethics of this trend have generated considerable controversy, and the AMA’s Journal of Ethics devoted an entire issue to the subject.

The use of surgeon-controlled robotic assistance in healthcare, especially in intricate surgeries, is hardly new. The da Vinci system, a robotic surgical assistance device installed in many U.S. hospitals, recently gained FDA clearance for use in hysterectomies and prostatectomies.

These devices enable surgeons to perform a variety of procedures through only small incisions in a patient’s body. Once the instruments are inside, the surgeon controls them through mechanical arms while viewing the surgical site in high-definition 3D. This type of minimally invasive surgery may reduce pain, blood loss, scarring, infection, and recovery time compared with procedures that do not use these devices.

But we’re still a long way from perfection. In fact, the FDA in February warned against the use of robotic assistance devices for mastectomies and other cancer surgeries, asserting the products may pose safety risks and result in poor outcomes for patients.

Perhaps more controversial is the experimentation with AI technology in hospitals. Many of us are familiar with Alexa, Amazon’s voice assistant that allows you to command a virtual “person” to do everything from playing music to turning on your house lights and ordering your groceries. Commanding a virtual assistant with your voice is a novel idea in the household, but imagine a charge nurse being replaced by a virtual machine, or Alexa being tasked with prescribing and ordering medications for a patient in a critical care unit.

Healthcare isn’t there yet, but it’s quickly warming to the possibilities. In fact, a 2018 survey by Tata Consultancy Services estimates that 86% of healthcare provider organizations, technology vendors, and life science companies are already using some form of AI. For the record, AI is defined as machine intelligence that “performs tasks that normally require human intelligence.”

Some hospitals have started using voice assistants to allow patients to order lunch, check medication regimens, and get on-demand medical advice at home. Devices manufactured by Amazon, Google, Apple, and Microsoft are being considered for new uses in ICUs and surgical recovery rooms. We could theoretically reach a point where Alexa goes beyond assisting with menial tasks and suggests treatments for a particular patient’s needs. Such an AI system might also monitor doctor-patient interactions or detect voice changes in a patient and warn caregivers of a possible impending health crisis.

“We believe that the technology that exists in patients’ homes will be a demand that patients will have sooner than later,” said Vishwanath Anantraman, chief innovation architect at New York’s Northwell Health, in a report from STAT magazine. “Voice tech can help improve service requests and deliver real-time analytics to the staff to ensure patient satisfaction and patient safety.”

He told the magazine that the hospital is planning to introduce several uses for voice technology and bots, which run automated tasks over the internet, during the next few months.

As another example, Mayo Clinic has started using Alexa-enabled programs to deliver first aid instructions to consumers, which could come in handy in rural areas where patients can’t get to hospitals quickly. For patients recovering at home after surgery, the technology could also deliver reminders about post-discharge instructions, a step up from printed handouts that patients might throw away or forget about.

In the future, AI technology could be used for such lifesaving applications as diagnosing and predicting illnesses. Some companies are developing systems that analyze speech signals such as tone and intensity to help uncover certain diseases. According to STAT magazine, these diagnostic tools could detect “subtle shifts in tone, clarity, and cadence” and help predict the onset of psychotic episodes, stroke, and other health problems before they escalate into emergencies.
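To make the idea concrete, here is a minimal, illustrative sketch, not any vendor’s actual product, of the kind of acoustic measurements such tools start from. It assumes the open-source Python library librosa and a hypothetical recording file, and simply estimates pitch (a rough proxy for tone), pitch variability, and loudness from a voice clip; real diagnostic systems layer trained clinical models on top of features like these.

    # Illustrative only: extract rough "tone" (pitch) and "intensity"
    # (loudness) measurements from a voice recording using the
    # open-source librosa library.
    import librosa
    import numpy as np

    def basic_voice_features(path):
        # Load the recording as a single-channel waveform
        y, sr = librosa.load(path, sr=None, mono=True)

        # Estimate the fundamental frequency over time (proxy for tone)
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y,
            fmin=librosa.note_to_hz("C2"),
            fmax=librosa.note_to_hz("C7"),
            sr=sr)

        # Root-mean-square energy over time (proxy for intensity)
        rms = librosa.feature.rms(y=y)[0]

        return {
            "mean_pitch_hz": float(np.nanmean(f0)),
            "pitch_variability_hz": float(np.nanstd(f0)),
            "mean_intensity": float(rms.mean()),
        }

    # Hypothetical usage: compare today's numbers against a patient's baseline
    features = basic_voice_features("patient_checkin.wav")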

“While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities,” wrote Michael J. Rigby in the AMA Journal of Ethics. “This powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy.”

Major concerns as well as major potential

Despite the billions of dollars that technology companies are pouring into the development of AI technology, there are still several factors that will prevent Alexa from winding up in the surgical suite anytime soon.

For one, privacy will always be a big deal in healthcare, and technology companies still have a way to go in developing an interactive speaker that is fully HIPAA compliant. If it’s easy enough for a criminal down the street to steal your credit card information, or for a hacker across the globe to hold a hospital’s network for ransom, how easy would it be for someone to steal protected patient information over the internet through a hacked AI system?

Next, there’s the issue of patient consent and how to approach it with the consumer. While one patient might love the idea of ordering meds using voice commands, another may not be comfortable with the practice. Can hospitals develop systems that allow for patient choice when using AI technology, especially if they already employ AI in the course of many treatments? And since an unconscious person cannot give medical consent, who decides whether to use AI in the middle of an operation?

Indeed, physicians will need a firm understanding of how AI systems work and the ability to communicate the pros and cons of such systems to a skeptical audience.

“We suggest that companies provide detailed information about AI systems, which can help ensure that physicians—and subsequently their patients—are well informed,” wrote Daniel Schiff, MS, and Jason Borenstein, PhD, in their Journal of Ethics article “How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?”

“By explaining to patients the specific roles of health care professionals and of AI and robotic systems as well as the potential risks and benefits of these new systems, physicians can help improve the informed consent process and begin to address major sources of uncertainty about AI,” they continued.

If AI becomes mainstream in the future, what will happen to traditional doctors and medical schools? Physician training already takes many years, and medical schools rely on years of tradition and protocol to educate the next generation of doctors and nurses. How will medical schools and the teaching hospitals behind them need to step up their game to educate physicians to work alongside virtual assistants?

In the article “Reimagining Medical Education in the Age of AI,” Steven A. Wartman, MD, PhD, and C. Donald Combs, PhD, lament that medical students are already required to consume so much information that many suffer from “stress-induced mental illness.” If that’s the case, how will students absorb the additional demand of understanding and working alongside AI systems?

One suggestion they present is for medical schools to change their curriculums from a focus on “information acquisition”—that is, rote memorization—to an emphasis on “knowledge management and communication.”

In other words, medical students will need to study AI systems to a point where they understand how the systems come to diagnostic decisions—and can confidently sell the technology to a nervous patient.

“The ability to interpret these probabilities clearly and sensitively to patients and their families represents an additional—and essential—educational demand that speaks to a vital human, clinical, and ethical need that no amount of computing power can meet,” they argued.

Lastly, and perhaps most importantly, there are the legal and moral ramifications of allowing AI into the realm of medical care. Since the days of Hippocrates, the medical profession has relied on the oath to do no harm, and that oath largely hinges on the human ability to understand what harm means and to feel empathy toward other humans. Can an emotionless artificial system take the place of a human doctor?

There are many questions when considering the future of AI in medicine. Who oversees the safety of the systems in any given facility? Who gets in trouble (and ultimately gets sued for malpractice) if an invisible avatar makes a wrong judgment? Who gets to overrule the decision Alexa makes?

Currently, malpractice falls under the jurisdiction of tort law, which allows patients to bring civil cases seeking damages for wrongful injury caused by another party’s wrongdoing, also referred to as negligence. But how can an AI system that doesn’t know right from wrong be sued for malpractice?

“For example, if the designers of AI cannot foresee how it will act after it is released in the world, how can they be held tortiously liable?” argued Hannah R. Sullivan and Scott J. Schweikart, JD, MBE, in a Journal of Ethics piece. “And if the legal system absolves designers from liability because AI actions are unforeseeable, then injured patients may be left with fewer opportunities for redress.”

Perhaps the device’s manufacturer could be sued, but only if the device is considered a medical device. Some suggestions call for conferring “personhood” on the AI device, giving it the same legal responsibility as a human, or for making a group of humans responsible for evaluating the care protocols assigned to the so-called “black box.”

“New legal solutions that craft novel legal standards and models that address the nature of AI, such as AI personhood or common enterprise liability, are necessary to have a fair and predictable legal doctrine for AI-related medical malpractice,” the authors argued.

John Palmer is a freelance writer who has covered healthcare safety for numerous publications. Palmer can be reached at johnpalmer@palmereditorial.com.