
2. Methodology: How well was the experiment set up and performed?

We tend to pay more attention to the conclusions and headlines of a research paper than to how the study was done (methodology) and what the researchers found (results). That is probably because those sections feel dry and foreign, full of numbers, symbols, and abbreviations in parentheses (e.g., one-way ANOVA: F(1, 20) = 8.496, p < .0035, ηp² = .198). If we don’t have a strong background in statistics and research methods, our eyes may glaze over the text while our brain enters a “screensaver” mode, overlooking data that could tell us more about the validity of the experiment.
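Those numbers are less mysterious than they look. As a rough illustration, here is a minimal Python sketch using made-up pain scores (the groups, sample sizes, and values are hypothetical, not from any study) of where statistics like F(1, 20), p, and ηp² come from: a one-way ANOVA compares the variability between groups to the variability within them.

```python
# Minimal sketch with hypothetical data: where F, p, and partial eta
# squared come from in a one-way ANOVA comparing two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=4.5, scale=1.5, size=11)  # hypothetical pain scores
control = rng.normal(loc=6.0, scale=1.5, size=11)    # hypothetical pain scores

# F and p: ratio of between-group to within-group variability
f_stat, p_value = stats.f_oneway(treatment, control)

# Partial eta squared (effect size): SS_between / (SS_between + SS_within)
grand_mean = np.concatenate([treatment, control]).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (treatment, control))
ss_within = sum(((g - g.mean()) ** 2).sum() for g in (treatment, control))
eta_p2 = ss_between / (ss_between + ss_within)

df_within = len(treatment) + len(control) - 2  # here, 20
print(f"F(1, {df_within}) = {f_stat:.3f}, p = {p_value:.4f}, eta_p2 = {eta_p2:.3f}")
```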

“The Introduction and Discussion sections are subjective and could very well be biased to make the study look favorable and important,” said Dr. Anoop Balachandran, Assistant Professor of Exercise Science at Queens College at the City University of New York in New York City, in an online interview with me. “Remember that the Methods and the Results sections are the only objective sections of a study.”

He suggests that clinicians follow the “PICO” of the study, which stands for population, intervention, control, and outcomes. Balachandran describes PICO as follows:

P stands for population. It tells us what population is used in the study. For example, it could be people with chronic pain, acute pain, hip or knee osteoarthritis, older adults, males, and so forth. Research is very population specific, and an intervention that works for acute pain or in older adults may not work as well in younger people or in those who have chronic pain. So, define the population clearly.

 

I stands for intervention. This could be manual therapy, pain education, exercise, and so forth. It is also important to understand the specifics of the intervention. For an exercise intervention, for example, specifics such as how many days per week, the intensity or dosage, the duration, and so forth should be noted. The specifics of an intervention can make a big difference.

For example, a 10-week intervention could have different outcomes than a similar intervention that runs for a full year.

 

C means control or comparator group. The C tells us what the intervention is being compared against. People always talk about the intervention but ignore the control group. For example, a new exercise intervention could look spectacular if my control group is just a sedentary control, but it will look mediocre if I compare it to a group that is given a standard exercise protocol.

 

O stands for outcomes: What are the main outcomes? The primary outcomes for manual therapy studies are pain and/or physical function. What people care about is whether they feel better or move better. For example, improving ROM (range of motion) is a common outcome, but on its own this outcome has no meaning in a person's life whatsoever. It only becomes meaningful if the improved ROM helps with tasks that were previously impossible due to the lack of shoulder ROM, such as washing one's hair or putting on a jacket. So outcomes do matter.

 

Once you have identified the PICO, you have managed to understand the bulk of the study in a very short time.
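To make the framework concrete, here is a minimal sketch of recording a study's PICO in a structured way. The study details below are hypothetical and only illustrate the kind of notes a reader might take.

```python
# Minimal sketch: jotting down the PICO of a (hypothetical) study.
from dataclasses import dataclass

@dataclass
class PICO:
    population: str
    intervention: str
    comparator: str
    outcomes: list[str]

example = PICO(
    population="older adults with chronic knee osteoarthritis",
    intervention="manual therapy, 2 sessions/week for 10 weeks",
    comparator="standard exercise protocol (not a sedentary control)",
    outcomes=["pain intensity", "physical function"],
)
print(example)
```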

_____________________

 

What would be considered “good” methodology versus “poor” methodology in manual therapy research?

 

“All the good methodological practices recommended for an RCT (randomized controlled trial) would apply to manual therapy studies, too,” Balachandran explained. “Some of them are: did the participants have an equal chance of falling into either of the groups (were they randomized?), could the participant or investigator have predicted which group they would fall into (concealed allocation), were the participants and investigators blind to the treatment (double-blind), and did they include all the participants in their final analysis (intention to treat)?
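As a rough, hypothetical illustration of the first two of those checks (the function name and numbers here are my own, not from any trial), randomization with concealed allocation simply means the assignment sequence is generated up front, at random, and hidden from everyone involved in enrollment:

```python
# Minimal sketch: a balanced, shuffled allocation sequence that neither the
# participant nor the investigator can predict ahead of enrollment.
import random

def concealed_allocation(n_participants, seed=None):
    """Return a shuffled, balanced list of 'treatment'/'control' assignments."""
    rng = random.Random(seed)
    half = n_participants // 2
    sequence = ["treatment"] * half + ["control"] * (n_participants - half)
    rng.shuffle(sequence)  # each participant has an equal chance of either group
    return sequence

# In a real trial the sequence is held by a third party (e.g., sealed envelopes
# or a central service) so that the next assignment stays concealed.
print(concealed_allocation(20, seed=42)[:5])
```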

 

“In my opinion, the most important methodological concern for manual therapy studies is blinding. Did the participants know if they were getting the real or the sham treatment? Ideally, we want the sham treatment to look and feel exactly the same as the intervention, minus the part that is supposedly giving the benefits of the intervention.

 

“For example, in good acupuncture studies, the sham treatment uses a needle identical to an acupuncture needle but which retracts into itself after an initial small prick. Mind you, even though the needle looks and feels exactly like a regular acupuncture needle, it still may have some beneficial effects due to the pressure on the skin. In methodologically sound studies, the researchers also report whether the blinding worked by asking the participants after the study. Of course, the doctor is aware of whether he is giving the sham or the real treatment, so there is a high risk of bias. Some bias cannot be completely eliminated even in the most rigorous studies. In short, studies with good methodology should have a well-validated sham treatment and also report whether the blinding worked, based on feedback from the participants.”7
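One common way to report whether blinding worked is to tabulate participants' end-of-study guesses against their actual assignment and test whether the guesses beat chance. The counts below are hypothetical, just to show the shape of the check:

```python
# Minimal sketch with hypothetical counts: did participants guess their
# assignment better than chance?
from scipy.stats import chi2_contingency

# Rows: actual assignment (real, sham); columns: participant's guess (real, sham)
guesses = [
    [14, 6],   # received real treatment: 14 guessed "real", 6 guessed "sham"
    [12, 8],   # received sham treatment: 12 guessed "real", 8 guessed "sham"
]

chi2, p_value, dof, expected = chi2_contingency(guesses)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value suggests guesses were no better than chance (blinding likely
# held); a small p-value suggests the blinding may have failed.
```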

 

Balachandran also noted that dropouts in a study can affect its risk of bias. “This is one aspect that most folks ignore or are not aware of. It is now pretty clear that dropouts in a study could be related to the prognosis. In manual therapy studies, this simply means that the people who drop out of a study might be the ones who didn’t experience any benefit or whose pain may have worsened. So the people who remain in the intervention group are the ones for whom the treatment worked well, and naturally the group average tends to look better than it truly is. This becomes especially important when one group has more dropouts than the other.
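A quick simulation makes the point. The numbers below are invented purely for illustration: if the participants who got worse are more likely to drop out, the average among those who remain overstates the true effect, which is why intention-to-treat analysis and dropout reporting matter.

```python
# Minimal sketch with simulated data: dropouts related to prognosis inflate
# the apparent benefit among those who stay in the study.
import numpy as np

rng = np.random.default_rng(1)
true_change = rng.normal(loc=-1.0, scale=2.0, size=200)  # change in pain; negative = improvement

# Suppose participants who got worse (positive change) are more likely to drop out.
drop_prob = np.where(true_change > 0, 0.5, 0.05)
dropped = rng.random(200) < drop_prob

print(f"True mean change (everyone):   {true_change.mean():+.2f}")
print(f"Mean change among completers:  {true_change[~dropped].mean():+.2f}")
# The completers-only average looks more favorable than the true effect.
```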

 

“We will never know if the results of a study are the actual truth, but the better the methods, the more confident we can be that we are closer to the truth. After all, science is all about finding the truth.”
