FROM THE PRESIDENT by Thomas Higgins, MD, MSPH, MBA
The Double-Edged Sword of AI: Defending the Art of Medicine in the Age of Artificial Intelligence
The promise of Artificial Intelligence (AI) in medicine is undeniable. We are frequently presented with a future in which algorithms predict sepsis hours before a clinical spike in white blood cell counts, administrative burdens vanish under the weight of automation and the “art of medicine” is finally liberated from the drudgery of data entry.
However, many Kentucky physicians are beginning to feel the sharper, more dangerous edge of this technological sword. Instead of liberation, we are witnessing the rise of “shadow downcoding,” where opaque algorithms second-guess clinical judgment without a human ever reviewing the medical chart. Rather than clarity, we face “black box” liability, where state medical boards can hold physicians 100% accountable for the errors of proprietary software that we are legally barred from inspecting. As we stand at this technological crossroads, the question for the Greater Louisville Medical Society (GLMS) is not whether we should reject innovation, but how we survive it. We must ensure that AI remains a tool in the physician’s hand rather than a master of the patient’s fate.
The View from Frankfort: The Kentucky AI Task Force
The ongoing deliberations of the Kentucky Artificial Intelligence Task Force underscore the urgency of the current legislative environment. Throughout late 2025 and into the first quarter of 2026, this body has sought to define the Commonwealth’s strategic position within the burgeoning AI economy. Currently, the legislative climate in Frankfort is focused on “regulatory sandboxes,” structured environments that permit technological experimentation with reduced oversight in order to catalyze innovation.
While the task force has evaluated the economic utility of AI in logistics and manufacturing, its discourse regarding broader clinical AI applications has lacked substantive engagement with patient safety protocols. This highlights a larger regulatory issue: the prevailing consensus still favors non-intervention to avoid “stifling innovation,” often deferring to federal guidelines that may not keep pace with rapid clinical integration. A regulatory framework aimed solely at attracting tech startups may unintentionally expose Kentucky’s health care system to risks from unproven diagnostic and surgical algorithms. The clinical community must take an active role in ensuring that the guardrails are expanded into a comprehensive framework for patient safety.
Balancing Uniformity and Safety in Federal AI Regulation
The backdrop to this challenge includes rapidly shifting federal mandates. The “One Big Beautiful Bill Act” (OBBBA), enacted in July 2025, and an Executive Order on Dec. 11, 2025, have set an agenda for “National AI Uniformity.” While uniformity sounds efficient, in practice it often acts as a ceiling rather than a floor.
This is evidenced by the recent rescinding of the “Richardson Waiver.” Established in 1971, this policy was a voluntary pledge by the Department of Health and Human Services (HHS) to subject its rules on public benefits, including Medicaid and Medicare, to the standard “notice-and-comment” process. In rescinding this waiver in 2025, the federal government signaled that speed and cost-savings now outrank clinical consensus. Without this waiver, the administrative state can bypass the very experts who understand the clinical reality of these policies, allowing for the integration of AI-driven reimbursement models without a formal period for identifying potential patient harms. For Kentucky physicians, this creates a dangerous dynamic: if the federal floor for safety drops, insurers and hospital systems may pressure practitioners to work at that lower standard, risking both patient lives and professional licensure.
Ambiguous Liability: Who Pays for the “Hallucination”?
The most immediate threat to medical practice is the widening liability gap. The Federation of State Medical Boards (FSMB), in its 2024 report, stated that physicians are “ultimately responsible” for AI-influenced decisions. If a clinical decision support tool “hallucinates,” for example by fabricating a drug interaction or missing a diagnosis, the physician who clicked “accept” is held liable.