Memoria [EN] No. 102 | Page 16

ARTIFICIAL INTELLIGENCE

AND PHYSICIAN CONSCIENCE

Amelia Haj, FASPE

This piece was written in 2018, after my experience on that summer's FASPE trip. What I saw and learned there inspired me to reflect on how AI might change decision-making for physicians and other healthcare professionals. At the time, AI was an emerging technology, without the near ubiquity it has today; large language models (LLMs) now play a far greater role in healthcare than anything available in 2018. In that sense, this reflection remains relevant. We have not yet outsourced care to these algorithms, but their growing role demands deeper and more sustained thought about how individual consciences will interact with their recommendations, and even their decisions. Only through committed attention to our moral frameworks can we hope to avoid potential complicity in wrongdoing.

In viewing the crimes of Nazi-era physicians, it is easy to find ourselves passing judgment. It is difficult to imagine any but the most depraved doctors willingly participating in murdering the disabled, experimenting on and torturing innocents, and supporting a regime that worked tirelessly to exterminate entire swathes of the population. No authority in this day and age, we think, could possibly compel us to set aside our morals so effortlessly.

In the decades since the 1930s and 1940s, the practice of medicine has changed dramatically: advances in medical knowledge and in standards of patient-physician interaction have both empowered patients to be active participants in their care and created expectations of near perfection in physicians' diagnostic and treatment abilities. In our ongoing effort to improve standards of care further, artificial intelligence (AI) software is being developed that can, in some cases, outperform physicians. A future in which doctors' actions are guided by computerized algorithms no longer seems fantastical. In this piece, I argue that modern-day physicians face an authoritative threat even more insidious, because well-intentioned, than the one faced by their Nazi-era counterparts: the gradual incorporation of AI into medical diagnostics and decision-making. This shift will require physicians to examine the role of their own consciences in their daily work even more carefully than they already do.

Artificial Intelligence and Medicine

At its most basic, a medical diagnosis carried out by a physician relies on algorithms of a sort: a patient's symptom(s) or complaint(s) trigger a series of questions, each leading the clinician down a path to a diagnosis and subsequent treatment. Years of study go into developing an understanding of these algorithms and learning when to apply them. Then, with experience, physicians can go on to gain a deeper understanding of the population they treat and can begin to account for less tangible factors, such as minor, seemingly irrelevant details in a patient's history, subtle changes in body language, or an unusual blip on a scan. It is these intangibles, this "gut sense" attained after seeing hundreds of cases, that make physicians more than mere mouthpieces for the content in medical textbooks. Anyone with enough coding skills to write a series of if/else statements could develop a tool that roughly approximates the stepwise process a clinician uses to determine the cause of a symptom such as chest pain. Only in recent years, however, has software become sophisticated enough to begin adapting dynamically to new information in ways that more closely resemble the human brain.
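To make the point concrete, the kind of hand-coded if/else logic described above might look like the following sketch. It is purely illustrative, not clinical guidance: every question, branch, and working impression here is a hypothetical placeholder, invented for this example.

```python
# Illustrative only: a toy if/else "triage" of chest pain, mirroring the
# stepwise questioning described in the text. All branches are hypothetical.

def assess_chest_pain(sudden_onset, radiates_to_arm,
                      worse_on_breathing, tender_to_touch):
    """Return a rough, hypothetical working impression for chest pain."""
    if sudden_onset and radiates_to_arm:
        return "consider cardiac cause; escalate urgently"
    elif worse_on_breathing:
        return "consider pulmonary cause; further workup"
    elif tender_to_touch:
        return "consider musculoskeletal cause"
    else:
        return "cause unclear; broaden history and exam"

print(assess_chest_pain(True, True, False, False))
```

A tool like this encodes only what its author wrote down in advance; it has no way to notice the intangibles, the odd detail in a history or the subtle change in body language, that the paragraph above describes.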

Many definitions of artificial intelligence exist, but all converge around the use of software that can mimic human decision-making. Simpler forms of AI usually rely on elaborate decision trees, but machine learning, a more sophisticated subtype of AI, incorporates the ability to learn by recognizing patterns in datasets. Deep learning is a subcategory of machine learning that trains artificial neural networks on large datasets, allowing them to examine and interpret documents and images, much as humans can learn to recognize patterns without a conscious awareness of having done so.
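The contrast between hand-coded rules and learning from data can be sketched in a few lines. The following toy one-nearest-neighbor classifier, with invented features and labels, stands in for the far more complex methods described above: nothing about the cases is written into the code; the program simply matches a new case to the most similar example it has "seen."

```python
# A minimal, toy illustration of learning patterns from data rather than
# hand-coding rules: a one-nearest-neighbor classifier in pure Python.
# The feature vectors and labels are invented for illustration only.

def nearest_neighbor(train, query):
    """Predict the label of the training example closest to the query
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical "seen cases": (feature vector, diagnosis-like label)
train = [
    ((1.0, 0.0), "pattern A"),
    ((0.0, 1.0), "pattern B"),
]
print(nearest_neighbor(train, (0.9, 0.1)))  # closest to the first example
```

Real deep-learning systems differ enormously in scale and mechanism, but the underlying shift is the same: the behavior of the program comes from the data it was trained on, not from rules a programmer wrote down.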

Fellowships at Auschwitz for the Study of Professional Ethics (FASPE) promotes ethical leadership for today’s professionals through annual fellowships, ethical leadership trainings, and symposia, among other means. Each year, FASPE awards 80 to 90 fellowships to graduate students and early-career professionals in six fields: Business, Clergy, Design & Technology, Journalism, Law, and Medicine. Fellowships begin with immersive, site-specific study in Germany and Poland, including at Auschwitz and other historically significant sites associated with Nazi-era professionals. While there, fellows study Nazi-era professionals’ surprisingly mundane and familiar motivations and decision-making as a reflection-based framework to apply to ethical pitfalls in their own lives. We find that the power of place translates history into the present, creating urgency in ethical reflection.

Each month one of our fellows publishes a piece in Memoria. Their work reflects FASPE’s unique approach to professional ethics and highlights the need for thoughtful ethical reflection today.
