AS GUIDELINES BECOME MORE FUZZY
Tom James, MD
The evolution of clinical guidelines continues. In the early days of guideline
development in the 1970s and 1980s,
detractors denounced them as "cookbook
medicine.” There were concerns that the art
of medicine would become supplanted by
algorithmic rules. Nowhere in the Norman
Rockwell paintings of the respected physician is there a depiction of a tablet computer
guiding the doctor to the next appropriate step in diagnosis. The
classical training of internists before the computer chip was to
consider all possibilities for a diagnosis, regardless of probability.
The worry was not to miss the “zebra” among the herd of horses. So
we learned the skills of ordering labs, consults or imaging studies to
rule out all possibilities. This is TV’s Dr. House at his finest.
Over time it became clear that while the most skillful physicians
were able to accurately diagnose the obscure condition, the variation in clinical outcomes of patients treated by different doctors
was significant. Standardization of clinical practice to achieve more
uniform clinical outcomes trailed similar evolutionary steps in manufacturing and other service industries. Doctors and patients raised
legitimate concerns about the complexities of human physiology
but there was agreement that initial approaches to diagnosis and
treatment could be standardized and allow for individual variations
based upon clinical response. So the management techniques of
W. Edwards Deming, Joseph Juran, and others were embraced by the
professional sector, and guidelines started to mature from post-op
order sets to whole approaches to diagnosis and treatment.
Two sentinel works caused the medical community to re-think its
antipathy toward guidelines. Those were the small area variation in
care analysis of Jack Wennberg at Dartmouth, and the 1999 publica-
tion by the Institute of Medicine entitled To Err is Human: Building
a Safer Health System. By developing greater standardization in
clinical approaches to patient care, physicians could reduce
unnecessary variation and achieve more uniform clinical outcomes.
During that time in Louisville, The Physicians Inc. (TPI) adopted
this philosophy and produced large numbers of consensus-based
clinical guidelines. At that time I was medical director for Alternative
Health Delivery System (AHDS)—a joint venture between Anthem
and four area hospitals. AHDS and TPI worked collaboratively to
publish, deploy and encourage these clinical guidelines. This work
was disconnected from any financial impact and, more importantly,
was not immediately accessible during patient care. Paper-based
guidelines are simply not helpful in clinical practice, so these
guidelines went largely unused.
Over the past decade, guideline development has become more
sophisticated. Point of care issues have been taken into account
through incorporation of key elements of guidelines into electronic
health records and specialty registries, like the American College
of Cardiology Pinnacle registry (http://www.ncdr.com/WebNCDR/
pinnacle/home). These tools put the relevant guideline elements—
but not the entire guideline—on the EHR screen while the patient
is still in the exam room, making them more accessible to
the treating physician. The guidelines are often used for measure
development. While a guideline may be a longitudinal branching
decision tree, there are a number of federal, other government and
insurance measurement tools which focus on a single point within
the guideline on which to build a measure. Those measures then
become the underpinnings of “Value Based Purchasing.” Thus we
see the development of financial consequences for adherence to
guidelines.
But have we put too much credence in these guidelines? There
MAY 2015