professional and moral obligations diverge, the role of the physician's conscience comes into play. For our purposes here, I will define conscience using a definition from the Stanford Encyclopedia of Philosophy: conscience is an internal "sense of duty" that compels us to "act according to moral principles or beliefs we already possess." An individual's conscience does not always correspond to perceived objective morality and does not need to originate in any particular source; it can arise from religious belief, one's own moral code, or an intuitive sense of right and wrong.
A simple example of a conscience-triggering conflict is a case in which a patient requires a lifesaving abortion, but the physician treating the patient believes that abortion is murder and does not want to be complicit by performing or assisting with the abortion. In this scenario, the physician's conscience serves as a check on the actions other practical authorities (the patient and medical establishment) expect them to perform. Essentially, the exercise of one's conscience can provide a means of overriding an authority in situations where a physician perceives the authority's judgment to be in error. A survey of physicians found that 42 percent believed that physicians should never be expected to do something that conflicts with their conscience, reflecting how important it is to many physicians to maintain their individual moral integrity. The American Medical Association recognizes that doctors are human beings who cannot be expected to divorce their personal beliefs from their actions at work. It therefore offers guidelines for how doctors can maintain their personal moral integrity while also ensuring that their patients receive the appropriate standard of care. If a physician's conscience can be said to act as a check on the overreach of other authorities, then the recommendations put forth by the AMA close this loop by acting as a check on the unfettered exercise of a physician's conscience (which could itself lead to overreach and diminish the quality of patient care) and thereby ensure that the physician's authority remains legitimate.
Physician Conscience and Artificial Intelligence
Subjects in Stanley Milgram's well-known electric shock experiments demonstrated the human propensity for following authority without question, offering a possible partial explanation for why physicians in Nazi Germany were able to commit atrocities seemingly without hesitation. In both situations, the actions being committed ("euthanizing" disabled children, on the one hand, and delivering what subjects believed were fatal doses of electricity, on the other) were objectively bad, despite the fact that those ordering them often presented such actions as means to positive ends. Taken to its logical conclusion, a healthcare system driven by AI would introduce a new source of authority, potentially to the exclusion of the three current sources of authority outlined above, since, as we have seen, deep neural networks have the capacity to outperform humans at complex tasks. In this scenario, the new authority would likely be intended as a genuine improvement upon a healthcare system that relies on the efforts of well-meaning yet error-prone humans. But the potential pitfalls seem nearly as varied as the humans the algorithms could one day replace, and the potential for error just as grave or worse.
The growing use of AI in medicine will undoubtedly create situations in which physicians disagree with computers' decisions. Perhaps a computer algorithm deems a woman's very severe abdominal pain to be insufficiently suggestive of ovarian torsion and recommends outpatient treatment for constipation. Perhaps it decides that a white man's MELD (Model for End-Stage Liver Disease) score of 34 is more deserving of a liver transplant than a Black man's MELD score of 35. Perhaps it performs a behind-the-scenes cost-effectiveness analysis and determines that a suicidal teenager's inpatient psychiatric care should not be covered by insurance. In such situations, the appeal to a physician's conscience may require expansion from an expression of personal morality to a tool for patient advocacy. The potential risks to the physician in speaking up, however, remain, making such situations especially challenging. In an era of defensive medicine, it may feel safer to defer to the algorithms. It may seem preferable, for instance, that a malpractice lawsuit land in the lap of the software manufacturer who failed to correctly fine-tune the algorithms to distinguish between constipation and ovarian torsion than in the lap of a physician who chose to override the computer's suggestion to give the patient Miralax with a decision to perform emergency surgery.
A careful approach to implementing AI in medicine will require an ongoing dialogue among physicians, healthcare administrators, and software developers regarding the legitimacy of AI's authority. Is it "right, justified, [and] supported by good reasons"? Will we hold onto the expectation that physicians deliver a certain standard of care, or will that standard be relaxed if the algorithm says it should be? Will practical authority one day reside only in AI, or will AI simply have a seat at the table along with the other sources of medical authority? Certainly, we will need to be mindful of historical precedent and create an environment where conscientious human decision-making is explicitly allowed and encouraged. A road paved with good intentions will not lead to hell if it is engineered with an understanding of the potential risks along the way and traveled with caution.
=====
Amelia Haj, MD, PhD, is a transfusion medicine physician in the Department of Pathology at Massachusetts General Hospital and the medical director of the MGH Kraft Family Blood Donor Center. She was a 2018 FASPE Medical fellow.
Bibliography:
1 A. Esteva et al., Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks, "Nature" 542 (2017), no. 7639, pp. 115–118, doi:10.1038/nature21056.
2 S. Mukherjee, A.I. Versus M.D., "The New Yorker," June 19, 2017, www.newyorker.com/magazine/2017/04/03/ai-versus-md.
3 J. Angwin et al., Machine Bias, "ProPublica," May 23, 2016, www.propublica.org/article/machinebias-risk-assessments-in-criminal-sentencing.
4 R. Caruana et al., Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission, in: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15), Sydney 2015, doi:10.1145/2783258.2788613.
5 D. S. Char et al., Implementing Machine Learning in Health Care: Addressing Ethical Challenges, "New England Journal of Medicine" 378 (2018), no. 11, pp. 981–983, doi:10.1056/NEJMp17142.
6 M. Lacewing, Authority and Legitimacy, London 2018, cw.routledge.com/textbooks/alevelphilosophy/data/AS/WhyShouldIBeGoverned/Authorityandlegitimacy.pdf.
7 Conscience, in: Stanford Encyclopedia of Philosophy, plato.stanford.edu/entries/conscience/#ConsMotiActMora.
8 M. S. Swartz, "Conscience Clauses" or "Unconscionable Clauses": Personal Beliefs Versus Professional Responsibilities, "Yale Journal of Health Policy, Law, and Ethics" 6 (2006), no. 2, pp. 269–350.
9 R. Lawrence et al., Physicians' Beliefs About Conscience in Medicine: A National Survey, "Academic Medicine" 84 (2009), no. 9, pp. 1276–1282.
10 American Medical Association, Code of Medical Ethics Opinion 1.1.7: Physician Exercise of Conscience, www.ama-assn.org/delivering-care/physician-exerciseconscience.