There are two prerequisites that clinicians and therapists need to accept, embrace and implement in clinical practice, and they are like two sides of the same coin. There are multiple fundamental problems in the musculoskeletal field, but the two presented below are among the most substantial barriers to the progress and development of high-quality, evidence-based care.
Modern, high-quality care should be research-based (Kamper 2018). As modern clinicians, we need to base our clinical reasoning on the current scientific consensus and use the goldmine of knowledge available from current research within the musculoskeletal field. There are multiple reasons we should use research, as outlined by Prof. Steve Kamper in his excellent clinical series “Evidence in Practice”, published in The Journal of Orthopaedic & Sports Physical Therapy.
Our treatments should be based on what we currently know
Dogmatic and unprogressive musculoskeletal therapists may find themselves phased out of their traditional roles if other, more up-to-date professionals grasp the current ‘evidence-based care’ paradigm. Our treatments and modalities should be based upon what we currently know, not what we wish to know in 5-10 years or what we knew some 30 years ago. A modern, pragmatic approach can be found in the second installment of Prof. Kamper’s clinical series, where he explains how clinicians need to ask a specific sequence of questions to drive the evidence-based practice process.
Clinicians need to ask themselves three questions:
1. What am I going to do with this patient?
2. What does the research evidence say?
3. How do I integrate the evidence with my clinical experience and the patient’s values?
Unfortunately, many clinicians skip step 2 and proceed directly to step 3, integrating their clinical experience while ignoring the research entirely. They skip this valuable step because they believe that clinical experience is more valuable than evidence. As noted by Zadro et al., 30% of therapists think their experience is more valuable than evidence.
As such, we are, paradoxically, ourselves a severe roadblock and barrier to delivering a more modern, high-quality, research-based model of care. We too often resist updating our interventions and models of care. As clinicians, we actually need research more desperately than we need toilet paper when taking a shit in the woods!
Prerequisite no. 1 – Ineffective interventions and modalities can seem effective even if they are not
Many otherwise knowledgeable clinicians fall prey to the therapeutic illusion, believing their intervention has an effect when, in reality, the intervention does not.
Prerequisite no. 2 – Outcome measures measure outcomes, not the effects of the intervention
In clinical practice, we only see outcomes, not the effects of our intervention, even though many professionals believe otherwise. It is arrogant to assume that only our intervention influences our patients. There are 168 hours in a week; it is pretty cocky to believe that the 30 minutes the patient spent with us made the positive impact, and not the other 167.5 hours of the week. That is why we need to look at experiments like RCTs (randomised controlled trials) to isolate the effects of an intervention with a higher degree of certainty. Clinical outcomes are always multifactorial, and because of this, we need to look at the research.
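To make this concrete, here is a minimal, purely illustrative Python simulation (all numbers are made up, not real patient data): a single-arm “clinic view” of pre/post change can look impressive even when the true intervention effect is small, and only an RCT-style comparison against a control arm isolates that effect.

```python
import random
import statistics

random.seed(1)

N = 200  # hypothetical patients per arm

def natural_course():
    """Six-week change in pain with NO treatment: natural history,
    other life factors and measurement noise (assumed numbers)."""
    return random.gauss(-1.5, 2.0)

TRUE_EFFECT = -0.5  # assumed (small) additional effect of the intervention

# "Clinic view": everyone is treated, and we only ever see the outcomes.
treated_change = [natural_course() + TRUE_EFFECT for _ in range(N)]
print("Mean pre/post change in treated patients:",
      round(statistics.mean(treated_change), 2))

# "RCT view": a control arm shows how much change happens without treatment,
# so the between-group difference isolates the effect of the intervention.
control_change = [natural_course() for _ in range(N)]
print("Mean change in control patients:",
      round(statistics.mean(control_change), 2))
print("Estimated treatment effect (difference between arms):",
      round(statistics.mean(treated_change) - statistics.mean(control_change), 2))
```

In this toy run, treated patients improve by roughly two points, which is tempting to credit to the intervention, yet the control arm improves by about 1.5 points on its own; only the between-group difference approximates the assumed true effect of -0.5.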
So the two barriers are that clinicians think they can know what “works” through the outcomes they get with the people they provide care for, and that even ineffective interventions can appear effective when they are not. Clearly, judging what works on this basis is erroneous.
Combined, the two barriers mean that many clinicians falsely believe they do not need to read and integrate research into their clinical practice. Thinking that we can use our subjective experiences to show what “works” is highly problematic for many reasons: there is a high risk of confounding variables and post hoc errors, and there is the major problem of separating regression to the mean from the effects of the intervention.
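Regression to the mean deserves a concrete illustration, because patients typically seek care when symptoms flare up and will, on average, drift back towards their usual level regardless of what we do. The sketch below is a toy example with made-up numbers and no treatment at all, yet it still produces an apparent improvement.

```python
import random
import statistics

random.seed(2)

def measured_pain(usual_level):
    """One pain rating: the patient's usual level plus day-to-day fluctuation."""
    return usual_level + random.gauss(0, 2.0)

# Hypothetical population with moderate usual pain levels.
usual_levels = [random.uniform(3, 6) for _ in range(10_000)]

# Patients book an appointment only when today's pain is high (a flare-up).
ratings = [(usual, measured_pain(usual)) for usual in usual_levels]
seeking_care = [(usual, today) for usual, today in ratings if today >= 7]

first_visit = [today for _, today in seeking_care]
follow_up = [measured_pain(usual) for usual, _ in seeking_care]  # no treatment given

print("Mean pain at first visit:", round(statistics.mean(first_visit), 1))
print("Mean pain at follow-up  :", round(statistics.mean(follow_up), 1))
```

Pain scores drop noticeably between visits even though nothing was done; in the clinic, that drop is easily mistaken for an effect of whatever intervention happened to be delivered in between.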
Relying on a sample as small as personal experience also commits multiple fallacies simultaneously, such as the hasty generalization fallacy, since no single-person sample is large enough to generalize from.
Not to mention that, when relying on experience and outcomes, typically no valid measure is used and there is no objective documentation. This opens the door to both interpretation errors and recall bias.
Causality (that is, a cause-effect relationship, or what “works”) can only be extracted from experimental research such as RCTs (Perry-Parrish et al.). Some types of research, such as epidemiological studies and case studies, cannot show causation; they can only provide correlational data, which cannot be used to extrapolate a cause-effect relationship, as noted by Prof. Brad Schoenfeld, PhD.
The end result of these two errors and barriers is summarised well by researcher and Associate Professor Adam Rufa below:
“We need to realize that our observations of the world are faulty and the conclusions we make based on our observations are prone to bias. Putting too much trust in our ability to make accurate conclusions based on our experience seems to be a big barrier to EBP. Just about every time I talk to someone about non-science based interventions they justify it by bringing up the experience leg of EBP.” Associate Professor Adam Rufa, PT, DPT, PhD, OCS
References: