learned discrete facts. Incredibly, and somewhat troublingly, a neural network cannot provide clear step-by-step rationales for its decisions. It may be able to identify photos of chickens accurately, say, and you might even be able to tease apart the factors it weighed most heavily in classifying a particular bird as a chicken, but it could not give you a protocol for improving your own ability to distinguish chickens from parakeets.
Artificial intelligence is already seamlessly integrated into our lives. Individualized Netflix suggestions, tailored Google search results, and optimized driving routes on Google Maps are all driven by AI. Smart assistants such as Siri and Alexa use AI to interpret our spoken commands, and dating apps such as Hinge are now exploring machine learning to improve their match suggestions. Far from being a futuristic prospect, AI already guides and shapes our actions and beliefs through decisions made by computer algorithms and neural networks.
Researchers have, of course, already begun to train machine learning software on medical datasets. A group of dermatologists published a letter in Nature in 2017 describing their use of a deep neural network to classify photographs of skin lesions as either benign or malignant. On average, the neural network outperformed dermatologists, suggesting that the technology can quickly reach, and even surpass, levels of mastery that humans take years to approach. Some have gone so far as to assert that certain tasks, especially those requiring image analysis, will soon be taken over entirely by deep learning algorithms, as physician and author Siddhartha Mukherjee describes in his 2017 piece on the subject for The New Yorker.
There are many other avenues for artificial intelligence to become integrated into medical practice. The broad movement towards electronic health records (EHRs) creates an obvious opening for natural language processing software, which can “understand” human-written language and form recommendations based on patterns in patients’ written medical records. Indeed, researchers have already explored the use of deep neural networks for earlier prediction of diseases and of events relevant to hospital performance metrics, such as hospital readmissions. Some have suggested using AI technologies to assist patients with dementia. Others make the case for developing AI tools that can approximate the decisions incapacitated patients would have made had they still possessed all their faculties, easing the difficulty of relying on surrogate decision makers. Given the integration of AI into other areas of our lives and the advances in its use in medical contexts, it is reasonable to expect its deployment for medical purposes to increase in the coming years.
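As a rough illustration of what such NLP software does, the sketch below scores free-text discharge notes for thirty-day readmission risk with a simple bag-of-words model. The notes, labels, and variable names are all hypothetical; a real system would train on thousands of de-identified records rather than a handful of toy examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical discharge notes and 30-day readmission labels (1 = readmitted).
notes = [
    "patient discharged home, stable, follow-up in two weeks",
    "discharged against medical advice, poor medication adherence",
    "stable on room air, family support at home, follow-up arranged",
    "multiple prior admissions for CHF exacerbation, lives alone",
]
readmitted = [0, 1, 0, 1]

# Bag-of-words pipeline: the model keys on patterns in the free text
# (phrases like "against medical advice") rather than structured fields.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(notes, readmitted)

new_note = "discharged home with visiting nurse, medications reconciled"
print(model.predict_proba([new_note])[0, 1])  # estimated readmission risk
```

Even this crude sketch makes the appeal clear: the “understanding” is statistical pattern-matching over clinicians’ own words, which is exactly what makes it both powerful and, as discussed below, vulnerable to the biases those words encode.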
Ethical Concerns
Many of the ethical considerations surrounding the use of AI in medicine center on the largely “black box” nature of machine learning software. Without knowing how decisions are made, we run the risk that AI tools will unwittingly perpetuate and amplify human biases embedded in the human-assembled datasets on which they are trained. One of the most oft-cited examples is the use of risk-assessment tools by criminal justice systems to predict the likelihood that a person convicted of a crime will reoffend. In an investigation by the news site ProPublica, these algorithms were found to be nearly twice as likely to incorrectly flag Black defendants as future reoffenders as they were white defendants. Remarkably, the algorithms did not use direct data about defendants’ races but instead relied on defendants’ answers to a series of questions, demonstrating that bias can emerge even in scenarios that appear to be neutrally designed. It is not difficult to imagine machine learning software intended to predict the best individualized treatment plan for a particular condition inadvertently exacerbating healthcare disparities along racial or health-literacy lines.
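To see how a proxy can smuggle group information into a nominally “blind” tool, consider the toy simulation below. All of the rates, the questionnaire item, and the group labels are hypothetical, chosen only to expose the mechanism: a risk score that never sees the protected attribute still produces sharply different false-positive rates across groups.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical protected attribute; the risk tool never sees it.
group = rng.integers(0, 2, n)

# A questionnaire answer correlated with group membership (an
# illustrative stand-in for the kinds of proxy items ProPublica
# describes, e.g. questions about family or neighborhood).
proxy = rng.random(n) < np.where(group == 1, 0.6, 0.2)

# Recorded reoffense is driven partly by the proxy (e.g. via
# differential enforcement), so the proxy looks "predictive".
reoffend = rng.random(n) < np.where(proxy, 0.45, 0.20)

# A "group-blind" risk tool: flag anyone whose proxy answer is positive.
flagged = proxy

# False positive rate per group: flagged, but did not reoffend.
for g in (0, 1):
    mask = (group == g) & ~reoffend
    print(f"group {g}: false positive rate = {flagged[mask].mean():.2f}")
```

The tool is arguably “fair” in the narrow sense that it treats every positive answer identically; the disparity in who is wrongly flagged arises entirely from the proxy’s uneven prevalence across groups.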
Sometimes, algorithms can simply misinterpret the information they are given, yielding potentially devastating consequences. In one striking example, engineers trained a machine learning tool to predict the risk of a patient dying of pneumonia and then to recommend whether the patient should be treated as an inpatient or could safely be treated as an outpatient. Paradoxically, the tool determined that pneumonia patients who also suffered from asthma had a considerably lower risk of death and recommended that they be treated as outpatients. In reality, such patients are generally at a much higher risk of death; precisely for that reason, they more consistently receive ICU levels of care, which leads to better outcomes. That aggressive care, not anything protective about asthma itself, led the algorithm to the erroneous (and circular) conclusion that because this subset of pneumonia patients generally had good outcomes, they could safely be treated as outpatients. The researchers in this study noted that because they used an “intelligible” machine learning tool to predict patient outcomes, they were able to identify the software’s reasoning; had they used a neural network, it would likely have come to the same conclusion in a more opaque fashion, making the source of the error much more difficult to identify.
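The confound is easy to reproduce. The sketch below is a minimal simulation with made-up numbers, not the study’s actual data or features: it trains an interpretable logistic regression on records in which asthma truly raises the risk of death but also reliably triggers ICU-level care, and the fitted model assigns asthma a negative, apparently “protective” coefficient.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical cohort: two features, scaled age and an asthma flag.
age = rng.normal(0.0, 1.0, n)
asthma = rng.integers(0, 2, n)

# True underlying risk: asthma genuinely RAISES the risk of death.
true_logit = -2.0 + 1.0 * age + 1.5 * asthma

# Confounder: asthmatic pneumonia patients were routinely given
# ICU-level care, which sharply lowered their observed mortality.
icu_care = asthma == 1
observed_logit = true_logit - 3.0 * icu_care

died = rng.random(n) < 1.0 / (1.0 + np.exp(-observed_logit))

# Train only on what the historical record contains: features + outcome.
X = np.column_stack([age, asthma])
model = LogisticRegression(max_iter=1000).fit(X, died)

# The fitted asthma coefficient comes out NEGATIVE: the model "learns"
# that asthma is protective, because the data never record that the
# ICU care, not the asthma, drove the good outcomes.
print(dict(zip(["age", "asthma"], model.coef_[0])))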
The competing motivations of AI software creators, buyers, and users are also an important consideration. Even though we generally trust them to have their patients’ best interests in mind, physicians are not, of course, flawless, unbiased actors; employers’ expectations of profits and physicians’ fears of malpractice lawsuits can often lead to unnecessary testing and overtreatment of conditions. In theory, AI could be taught to neutralize motivations that threaten to compromise patient care, but, in a profit-driven healthcare system, it is reasonable to assume that the software employed may reflect motives beyond simply improving patient care.
Conscience and Authority in Medical Decision-Making
A future in which AI software governs a physician’s every move is not on the horizon quite yet. The idea of such an extreme, however, offers an opportunity to explore how physicians should respond in situations in which their authority is supplanted by a supposedly superior, yet still fallible, authority. To develop a framework for how physicians can navigate this hypothetical new world, we must first establish how to determine whether an authority is legitimate and what this means in a medical context. Then we can identify how an individual physician’s conscience operates in a normal medical setting.
We often view “authority” in purely political or legal terms, but it is relevant to the world of medicine too, which is notoriously hierarchical. In a theoretical sense, “authority” describes someone who is an expert; in a practical sense, it refers to a person or entity with the right and power to guide and control a group of people. There are different views about how to determine whether a practical authority is legitimate, but, for the purposes of this discussion, I will define a legitimate authority as one that is “right, justified, [and] supported by good reasons,” requiring obedience even if inconsistently enforced.
In an ideal 21st-century patient-doctor relationship, the physician acts as a theoretical authority, but practical authority is distributed across the patient, the physician, and a set of collectively held expectations about standards of care. No longer is the physician the paternalistic authority figure from a bygone era. Today, physicians are obligated to make available a standard of care determined by the broader medical community while still ensuring that the patient is an active participant in their care decisions. Ongoing trust that all parties have the best interest of the patient in mind grants legitimacy to the physician’s and healthcare system’s authority; conflicting interests, such as the desire to increase profits, deliver a blow to the credibility of the physician and healthcare system.
When a physician perceives that their