Medical journals acknowledge the importance of such stories in health care practice: Annals of
Internal Medicine includes regular doctor-as-patient stories, just as the British Medical Journal
invites authors to submit stories about memorable patients, mistakes, and anything else that
conveys “instruction, pathos, or humour.”
Despite the example set by medicine and sociology, nursing is restricting, rather than expanding,
what it allows authors to present. This situation requires rapid redress. In the paragraphs
to come, I will describe how the journals which serve as the mouthpiece of nursing have become
overly concerned with presenting its scholarship and discussing its discipline in a standardised
and exclusionary manner. This reflects a positivistic, audit-oriented approach to knowledge generation
that is stymieing our profession and its scholars. That approach emerges from a devotion to
evidence-based practice, and persists to the detriment of the field. An overreliance on systematic
review trivialises nursing’s intellectual autonomy, instead elevating method and design to a
hierarchically unjustified supreme position.
The idea of combining the results of more than one study of a similar phenomenon in order to
increase their impact is at the heart of the systematic review. Early attempts at this approach were
undertaken by Karl Pearson[7,8] and Ernest Jones, whose work was only “discovered” in 2003[9]
by an Anglocentric field, ignorant of Jones’ publication (written in French), which reviewed material
published predominantly in French and German. In 1932, Ronald Fisher presented statistical
techniques for combining the probabilities yielded by independent studies.[10]
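For readers less familiar with the mechanics, the best known of these techniques is Fisher’s combined probability test; what follows is a brief sketch in standard notation rather than a quotation from the cited source. Given p-values $p_1, \ldots, p_k$ from $k$ independent studies of the same hypothesis, the pooled statistic
\[
X^2 = -2 \sum_{i=1}^{k} \ln p_i
\]
follows a chi-squared distribution with $2k$ degrees of freedom when the null hypothesis holds in every study. Three studies each reporting $p = 0.10$, for instance, combine to $X^2 \approx 13.8$ on 6 degrees of freedom, a pooled $p$ of roughly 0.03, illustrating how modest individual results can become compelling in aggregate.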
But the practice did not become prevalent until the second half of the 20th century. In the late
1970s, a number of summarising research papers were published, including Hall’s[11] “Gender
Effects in Decoding Nonverbal Cues,” Smith and Glass’[12] “Meta-analysis of Psychotherapy
Outcome Studies,” and Rosenthal and Rubin’s[13] summary of 345 experiments studying the
tendency of researchers to obtain the results they expect because of their influence in shaping
responses. This study did not attempt to assess the quality of the individual experiments, but rather to
encompass the results of all existing studies. Their paper, they suggested, could serve as a
methodological template for summarising other entire areas of research.
Evidence-based practice enhanced the prominence of this method, as both rely upon the same
premises. Archie Cochrane’s 1972 diatribe on Effectiveness and Efficiency is at the base of the
contemporary evidence-based practice movement. There, he lamented the absence of any
measurement of the effectiveness of medical interventions and described the randomised controlled
trial as a tool for “open[ing] up the new world of evaluation and control” and perhaps saving the
National Health Service.[14]
The systematic review is “the application of scientific strategies that limit bias to the systematic
assembly, critical appraisal, and synthesis of all relevant studies on a specific topic”.[15, p167] This
definition emerged from the Potsdam Consultation: a consortium organised to assess and address
the production of high-quality meta-analyses and reviews of randomised controlled trials. The
Potsdam Consultation developed a list of guiding principles and a methodological overview
covering protocol development, search strategy, study selection, quality assessment, analysis,
evaluation of heterogeneity, subgroup analyses, sensitivity analyses, presentation, interpretation,
and dissemination.[15]