professional and moral obligations diverge, the role of the physician’s conscience comes into play. For our purposes here, I will define conscience using a definition from the Stanford Encyclopedia of Philosophy: conscience is an internal “sense of duty” that compels us to “act according to moral principles or beliefs we already possess.” An individual’s conscience does not always correspond to perceived objective morality and does not need to originate in any particular source; it can arise from religious belief, one’s own moral code, or an intuitive sense of right and wrong.
A simple example of a conscience-triggering conflict is a case in which a patient requires a lifesaving abortion, but the physician treating the patient believes that abortion is murder and does not want to be complicit by performing or assisting with the abortion. In this scenario, the physician’s conscience serves as a check on the actions other practical authorities (the patient and medical establishment) expect them to perform. Essentially, the exercise of one’s conscience can provide a means of overriding an authority in situations where a physician perceives the authority’s judgment to be in error. A survey of physicians found that 42 percent believed that physicians should never be expected to do something that conflicts with their conscience, reflecting how important it is to many physicians to maintain their individual moral integrity. The American Medical Association recognizes that doctors are human beings who cannot be expected to divorce their personal beliefs from their actions at work. It therefore offers guidelines for how doctors can maintain their personal moral integrity while also ensuring that their patients receive the appropriate standard of care. If a physician’s conscience can be said to act as a check on the overreach of other authorities, then the AMA’s recommendations close this loop: they act as a check on the unfettered exercise of a physician’s conscience, which could itself lead to overreach and diminish the quality of patient care, and thereby ensure that the physician’s authority remains legitimate.
Physician Conscience and Artificial Intelligence
Subjects in Stanley Milgram’s well-known electric shock experiments demonstrated the human propensity for following authority without question, offering one possible explanation for how physicians in Nazi Germany were able to commit atrocities seemingly without hesitation. In both situations, the actions being committed—“euthanizing” disabled children, on the one hand, for example, and delivering (fake) fatal doses of electricity, on the other—were objectively bad, despite the fact that those ordering them often presented such actions as means to positive ends. Taken to its logical conclusion, a healthcare system driven by AI would be one that introduces a new source of authority—potentially to the exclusion of the current three sources of authority outlined above, since, as we have seen, deep neural networks have the capacity to outperform humans at complex tasks. In this scenario, the authority would likely be intended as a genuine improvement upon a healthcare system that relies on the efforts of well-meaning yet error-prone humans. But the potential pitfalls seem nearly as varied as the humans the algorithms could one day replace, and the potential for error just as grave or worse.
The growing use of AI in medicine will undoubtedly create situations in which physicians disagree with computers’ decisions. Perhaps a computer algorithm deems a woman’s severe abdominal pain insufficiently suggestive of ovarian torsion and recommends outpatient treatment for constipation. Perhaps it decides that a white man’s MELD (Model for End-Stage Liver Disease) score of 34 is more deserving of a liver transplant than a Black man’s MELD score of 35. Perhaps it performs a behind-the-scenes cost-effectiveness analysis and determines that a suicidal teenager’s inpatient psychiatric care should not be covered by insurance. In such situations, the appeal to a physician’s conscience may require expansion from an expression of personal morality to a tool for patient advocacy. The potential risks to the physician in speaking up, however, remain, making such situations especially challenging. In an era of defensive medicine, it may feel safer to defer to the algorithms. It may seem preferable, for instance, that a malpractice lawsuit land in the lap of the software manufacturer who failed to fine-tune the algorithms to distinguish between constipation and ovarian torsion than in the lap of a physician who chose to override the computer’s suggestion of Miralax and perform emergency surgery instead.
A careful approach to implementing AI in medicine will require an ongoing dialogue among physicians, healthcare administrators, and software developers regarding the legitimacy of AI’s authority. Is it “right, justified, [and] supported by good reasons”? Will we hold onto the expectation that physicians deliver a certain standard of care, or will that standard be relaxed if the algorithm says it should be? Will practical authority one day reside only in AI, or will AI simply have a seat at the table along with the other sources of medical authority? Certainly, we will need to be mindful of historical precedent and create an environment where conscientious human decision-making is explicitly allowed and encouraged. A road paved with good intentions will not lead to hell if it is engineered with an understanding of the potential risks along the way and traveled with caution.
=====
Amelia Haj, MD, PhD is a transfusion medicine physician in the Department of Pathology at Massachusetts General Hospital and the medical director of the MGH Kraft Family Blood Donor Center. She was a 2018 FASPE Medical fellow.