Fundamental prerequisites for modern high-quality care


There are two prerequisites that clinicians and therapists need to accept, embrace and implement in clinical practice, and they are like two sides of the same coin.

Now, there are multiple fundamental problems in the musculoskeletal field. However, the two presented below are among the most substantial barriers to the progress and development of high-quality, evidence-based care.

Modern high-quality care should be research-based (Kamper 2018). As modern clinicians, we need to base our clinical reasoning on the current scientific consensus, and make use of the goldmine of knowledge available from current research within the musculoskeletal field.

There are multiple reasons we should use research, as mentioned by Prof. Steve Kamper in his excellent clinical series “Evidence in Practice” published in The Journal of Orthopaedic & Sports Physical Therapy.

Our treatments should be based on what we currently know

Dogmatic and unprogressive musculoskeletal therapists may find themselves being phased out of their traditional roles if other, more updated professionals grasp the current ‘evidence-based-care’ paradigm. Our treatments and modalities should be based upon what we currently know, not what we wish to know in 5-10 years or what we knew some 30 years ago.

A modern pragmatic approach can be found in the second installment of Prof. Kamper’s clinical series. Kamper explains how clinicians need to ask a specific sequence of questions that can drive the evidence-based practice process.

Clinicians need to ask themselves three questions:

1. What am I going to do with this patient?

2. What does the research evidence say?

3. How do I integrate the evidence with my clinical experience and the patient’s values?

Unfortunately, many clinicians skip step 2 and proceed directly to step 3, integrating their clinical experience while ignoring the research entirely. They skip this valuable step because they believe clinical experience trumps evidence; indeed, as noted by Zadro et al. (2019), 30% of therapists think their experience is more valuable than evidence.

As such, we clinicians are, paradoxically, ourselves a severe roadblock to delivering a more modern, high-quality, research-based model of care. We too often resist updating our interventions and models of care. As clinicians, we actually need research more desperately than we need toilet paper when taking a shit in the woods!

 

Prerequisite no. 1 – Ineffective interventions and modalities can seem effective even if they are not

“Unfortunately, practitioners’ accumulated, day-to-day, informal impressions of diagnostic reliability and clinical efficacy are of limited value. To help clarify why even treatments entirely lacking in direct effect can seem helpful […]” – Hartman 2009



Many otherwise knowledgeable clinicians fall prey to the therapeutic illusion, believing their intervention has an effect when, in reality, the intervention does not.

 

Prerequisite no. 2 – Outcome measures measure outcomes, not effects of the intervention

In clinical practice, we only see outcomes, not the effects of our intervention, though many professionals believe otherwise. It is arrogant to assume that only our intervention influences our patients. There are 168 hours in a week; it is pretty cocky to believe that the 30 minutes the patient spent with us made the positive impact, and not the other 167.5 hours of the week. That is why we need to look at experiments like RCTs, which isolate the effects of an intervention with a higher degree of certainty.
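To make the outcome-versus-effect distinction concrete, here is a minimal, hypothetical simulation (all numbers are invented for illustration, not taken from any study): simulated patients improve by roughly two pain points through natural recovery alone, so the within-group change a clinician observes looks impressive even though the simulated treatment effect is exactly zero. Only the between-group comparison of an RCT recovers the true (null) effect.

```python
import random

random.seed(1)

def simulate_patient(treated):
    """Pain score (0-10) at baseline and one week later.
    Assumed toy model: ~2 points of natural recovery, zero true treatment effect."""
    baseline = random.uniform(5, 9)
    natural_recovery = 2.0          # happens with or without treatment
    treatment_effect = 0.0          # the intervention does nothing in this model
    noise = random.gauss(0, 1)      # day-to-day measurement fluctuation
    followup = baseline - natural_recovery - (treatment_effect if treated else 0.0) + noise
    return baseline, followup

n = 1000
treated = [simulate_patient(True) for _ in range(n)]
control = [simulate_patient(False) for _ in range(n)]

# What a clinician sees in practice: the within-group change (the "outcome")
outcome = sum(b - f for b, f in treated) / n

# What an RCT estimates: the between-group difference (the "effect")
effect = (sum(f for _, f in control) - sum(f for _, f in treated)) / n

print(f"Mean improvement in treated patients: {outcome:.2f} points")
print(f"RCT-estimated treatment effect:       {effect:.2f} points")
```

The treated group improves by about two points, yet the randomised comparison correctly estimates the treatment effect at roughly zero: the outcome was real, but the intervention did not cause it.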

Clinical outcomes are always multifactorial. Because of this fact, we need to look at the research.

“Perhaps it is unfortunate that the physiotherapy profession has responded to the perception that physiotherapists must justify what they do by routinely measuring clinical outcomes. The implication is that measures of outcome can provide justification for intervention. Arguably that is not the case. Outcome measures measure outcomes. They do not measure the effects of intervention. Outcomes of interventions and effects of interventions are very different things.”

“Clinical outcomes are influenced by many factors other than intervention, including the natural course of the condition, statistical regression, placebo effects, and so on. (Tuttle (2005) makes this point clearly in his article, in this issue, on the predictive value of clinical outcome measures.) The implication is that a good outcome does not necessarily indicate that intervention was effective; the good outcome may have occurred even without intervention. And a poor outcome does not necessarily indicate that intervention was ineffective; the outcome may have been worse still without intervention. This is why proponents of evidence-based physiotherapy, including ourselves (Herbert et al 2005), argue it is necessary to look to randomised trials to determine, with any degree of certainty, the effects of intervention.“ Herbert et al. 2005

So the two barriers are these: clinicians believe they can know what “works” from the outcomes of the people they provide care for, and ineffective interventions can appear effective even when they are not. Both beliefs are erroneous.

Combined, the two barriers mean that many clinicians falsely believe they do not need to read and integrate research into their clinical practice. 

Thinking that we can use our subjective experiences to show what “works” is highly problematic for many reasons. There is a high risk of confounding variables and post hoc errors, and there is the major problem of separating regression to the mean from the effects of the intervention.
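Regression to the mean in particular is easy to underestimate. The hypothetical sketch below (invented numbers, Python standard library only) selects “patients” because one pain measurement happened to be unusually high, then re-measures them later with no treatment at all. The average score still drops substantially, purely because extreme measurements tend to be followed by more typical ones.

```python
import random

random.seed(2)

# Toy model: each person's pain fluctuates around a stable personal mean.
def pain_measurement(true_mean):
    return true_mean + random.gauss(0, 2)   # day-to-day fluctuation

# A population with stable underlying pain levels between 3 and 6
population = [random.uniform(3, 6) for _ in range(10_000)]

# People seek care on a day their measured pain happens to exceed 7
first = [(m, pain_measurement(m)) for m in population]
seeking_care = [(m, score) for m, score in first if score > 7]

# Re-measure the same people later, with zero treatment given
before = sum(score for _, score in seeking_care) / len(seeking_care)
after = sum(pain_measurement(m) for m, _ in seeking_care) / len(seeking_care)

print(f"Average pain at presentation:             {before:.1f}")
print(f"Average pain at follow-up (no treatment): {after:.1f}")
```

In a clinic, that same drop would be observed after whatever intervention was given in between, and it is tempting to credit the intervention for it.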

Generalising from a sample as small as personal experience also commits multiple fallacies simultaneously, such as the hasty generalisation fallacy: no single clinician’s caseload is large enough to generalise from.

Not to mention that when relying on experience and outcomes, typically no valid measure is used and there is no objective documentation. This opens the door to both interpretative errors and recall bias.

Causality (that is, a cause-effect relationship, or what “works”) can only be established through experimental research like RCTs (Perry-Parrish & Dodge 2010). Other types of research, such as epidemiological studies and case studies, cannot show causation; they can only provide correlative data, which cannot be used to extrapolate a cause-effect relationship, as noted by Prof. Brad Schoenfeld, PhD.

The end result of these two errors and barriers is aptly summarised by researcher and Associate Professor Adam Rufa below:

“We need to realize that our observations of the world are faulty and the conclusions we make based on our observations are prone to bias. Putting too much trust in our ability to make accurate conclusions based on our experience seems to be a big barrier to EBP. Just about every time I talk to someone about non-science based interventions they justify it by bringing up the experience leg of EBP.” Associate Professor Adam Rufa, PT, DPT, PhD, OCS

 

References:

Hartman SE. Why do ineffective treatments seem helpful? A brief review. Chiropr Osteopat. 2009 Oct 12;17:10.

Herbert R, Jamtvedt G, Mead J, Hagen KB. Outcome measures measure outcomes, not effects of intervention. Aust J Physiother. 2005;51(1):3-4.

Kamper SJ. Engaging With Research: Linking Evidence With Practice. J Orthop Sports Phys Ther. 2018 Jun;48(6):512-513.

Kamper SJ. Asking a Question: Linking Evidence With Practice. J Orthop Sports Phys Ther. 2018 Jul;48(7):596-597.

Perry-Parrish C, Dodge R. Research and statistics: validity hierarchy for study design and study type. Pediatr Rev. 2010 Jan;31(1):27-9.

Zadro J, Peek AL, Dodd RH, McCaffery K, Maher C. Physiotherapists’ views on the Australian Physiotherapy Association’s Choosing Wisely recommendations: a content analysis. BMJ Open. 2019 Sep 20;9(9):e031360.