A major problem in pain management right now is an epidemic of erroneous reasoning: a spread of “broscience”, non-scientific thinking, and dysrationalia. In debates, when people are confronted with an argument or evidence that goes against their preconceived beliefs, the common answer is “but I know it works” or “I have seen it work”.
“The first principle is that you must not fool yourself – and you are the easiest person to fool.” – Richard P. Feynman
There are multiple fundamental problems with this line of thinking, and it presents one of the most substantial barriers to progress and development in pain rehabilitation. Much of our future path lies in our ability to update the theories, narratives, philosophies, and world-view that govern us. To some extent, we are blindfolded by the outdated world-view we currently rely on in pain management and in physiotherapy.
Paradoxically, we ourselves are often a substantial roadblock to a more modern (science-based) model of care and view of pain. We often resist updating the models of care we use with people living with pain, even though doing so would base our care upon current, more valid models. There are multiple logical and scientific errors committed when making the “I have seen it work” argument. I will briefly touch upon some of the largest ones below:
When saying it “works”, we miss the fact that outcomes and effects of interventions are two separate things. As stated by Herbert et al. (2005), “Outcome measures measure outcomes, not effects of intervention”. Clinical outcomes are influenced by many factors other than the given intervention, such as regression to the mean, placebo effects, the natural course of the condition, and many more (Herbert et al. 2005).
Multiple factors, like sleep or the mere passing of time, could play a huge role in an improved outcome. Ignoring the potential effect such factors have on the patient’s outcome is a large error. By overlooking the factors that are influencing the patient, we also miss the potential therapeutic benefit that knowledge of these factors could bring to our treatment outcomes.
We must also not forget what Herbert et al. (2005) state: “a good outcome does not necessarily indicate that intervention was effective; the good outcome may have occurred even without intervention. And a poor outcome does not necessarily indicate that intervention was ineffective; the outcome may have been worse still without intervention.”
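To make this concrete, here is a minimal simulation sketch (in Python, using invented numbers that are not taken from any of the cited studies): patients seek care when their pain happens to flare up, and are re-measured later under a “treatment” that has no effect whatsoever.

```python
# Purely illustrative sketch with invented numbers (not data from any cited
# study): a "treatment" with zero effect still looks like it works, because
# patients tend to seek care at a flare-up (regression to the mean) and the
# condition has a mild natural course of recovery.
import random

random.seed(1)

def pain_score(severity):
    """A 0-10 pain rating: underlying severity plus day-to-day fluctuation."""
    return min(10.0, max(0.0, severity + random.gauss(0, 1.5)))

# Underlying severity for a large hypothetical population.
population = [random.uniform(3, 7) for _ in range(10_000)]

# Patients consult when a measurement happens to be high (a flare-up).
consulting = []
for severity in population:
    score = pain_score(severity)
    if score >= 6:
        consulting.append((severity, score))

# Follow-up weeks later: slight natural recovery, and NO treatment effect.
follow_up = [pain_score(severity - 0.5) for severity, _ in consulting]

baseline_mean = sum(score for _, score in consulting) / len(consulting)
followup_mean = sum(follow_up) / len(follow_up)

print(f"Mean pain at first consultation: {baseline_mean:.1f} / 10")
print(f"Mean pain at follow-up:          {followup_mean:.1f} / 10")
# The group improves noticeably even though nothing "worked": selecting
# patients at a flare-up plus the natural course is enough to create an
# apparent effect.
```

Nothing in this sketch is meant to be realistic; it only shows how easily a within-patient improvement can arise without any treatment effect at all.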
When the “I have seen it work” argument is made, it is also erroneous from a logical standpoint: it commits the post hoc fallacy. The full name of this common thinking error is post hoc ergo propter hoc (from Latin: “after this, therefore because of this”). An often-used example of this error: the rooster crows immediately before sunrise; therefore, the rooster causes the sun to rise. This is of course wrong. Yet we commit the same post hoc fallacy when we conclude that our intervention “worked” because the patient got better after some time.
As stated by Dr. Jonathan Fass, DPT: “I wish that we could all learn to separate clinical outcomes from post hoc rationalizations of physiological mechanisms of action.”
We must remember that temporal priority (or chronological order) is only one of the indicators of a possible causal relationship. Other indicators might be a spatial connection or a history of regularity. But temporal priority alone is insufficient to establish a causal relationship, because if it were enough, then any event that preceded another event could be believed to be in a causal relationship with it; clearly, this is not the case (Damer 2009).
So the problem with the argument comes down to two distinct issues:
1. How do we know there was an effect? What measure was used, and is it a valid measure?
2. How can we assess clinically that it was the intervention, and not some other factor (like sleep, time, the natural course of the musculoskeletal condition, or another unknown confounder), that caused the effect?
When we make objective causal “truth” claims, like “I have seen it work”, we are trespassing in the realm of science and epistemology. When doing so, we should, as a bare minimum, have a basic understanding of the forest (of science and epistemology) we are so abruptly trespassing in. When making causal claims, the questions below could serve as a blueprint for reflecting upon the validity of the claims. They should also give an estimate of the truthfulness and plausibility of the claim, and help make sure that you are in fact not just fooling yourself, as Prof. Feynman would say.
How do you know it “works”, may I ask? How did you calculate the strength of this causal inference? How did you deal with the problem of regression to the mean? And survivorship bias? Or the difficulty of separating correlation from causation? Or other endogeneity problems? Or the problem of having no control group? Or the potential problem of sample selection bias? And various other potential biases? How did you control for multiple confounding variables? What measure did you use, and is it a valid measure? Did you only use PROMs (patient-reported outcome measures)?
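To pick out just one of these traps, here is a small hypothetical sketch of survivorship bias (again in Python, with invented numbers): if the patients who do not improve quietly stop attending, the improvement rate we remember is computed mostly from the ones who came back.

```python
# Hypothetical sketch with invented numbers: survivorship bias. Patients who
# do not improve are assumed less likely to return, so the clinician's
# remembered "success rate" is built mostly from the improvers.
import random

random.seed(2)

N_PATIENTS = 1000
TRUE_IMPROVEMENT_RATE = 0.40          # 40% improve, regardless of treatment

improved = [random.random() < TRUE_IMPROVEMENT_RATE for _ in range(N_PATIENTS)]

def attends_follow_up(did_improve):
    # Assumption: improvers are far more likely to return and report progress.
    return random.random() < (0.80 if did_improve else 0.30)

seen_again = [i for i in improved if attends_follow_up(i)]

true_rate = sum(improved) / len(improved)
observed_rate = sum(seen_again) / len(seen_again)

print(f"True improvement rate:                     {true_rate:.0%}")
print(f"Improvement rate among returning patients: {observed_rate:.0%}")
# Experience built on the patients we see again can make a treatment look far
# more effective than it actually is.
```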
I want to make it clear that I see no problem with people sharing their subjective experiences. What I have a huge issue with is when people make objective causal “truth” claims based only upon their own subjective experiences. If you are making an objective claim, you should be able to provide objective evidence to support it.
So the underlying question remains: Can we subjectively assess what we experience and remember with some degree of objectivity?
Hill et al. looked at the validity of self-reported energy intake as determined using the doubly labelled water technique, a reference method for measuring energy expenditure. They reported that people categorized as “large-eaters” overestimated their intake by 19%, while people categorized as “small-eaters” under-reported their intake by 46%. Schoeller et al. even advised against the use of self-reported estimates of energy intake in research, due to their potential inaccuracies and biased reporting.
Can we then, in clinical practice, use our experience to detect small and large effects of treatments?
Prof. Howick answers this question in his book The Philosophy of Evidence-Based Medicine: “To sum up, experience alone is usually an insufficient tool for detecting small and large effects.” This echoes a statement made by Dr. Neil O’Connell: “You can’t tell if a treatment works just from clinical observation and experience.”
Some of the reasons why we can’t trust our own experience are summarized by Higgs & Jones in their book Clinical Reasoning in the Health Professions:
“No matter how much we may think we have an accurate sense of our practice, we are stymied by the fact that we are using our own interpretive filters to become aware of our own interpretive filters! This is the pedagogic equivalent of a dog trying to catch its own tail, or of trying to see the back of your head while looking in the bathroom mirror. To some extent we are all prisoners trapped within the perceptual frameworks that determine how we view our experiences. A self-confirming cycle often develops whereby our uncritically accepted assumptions shape clinical actions which then serve only to confirm the truth of those assumptions.”
One of the fundamental problems here, as stated by Lacy et al., is: “findings from basic psychological research and neuroscience studies indicate that memory is a reconstructive process that is susceptible to distortion.”
This means that, to a large degree, we can’t trust what we remember. There are many flaws in our memory; intuitively we all know this, which is why we use calendars, to-do lists, and shopping lists when we don’t want to forget anything. As noted by Prof. Loftus in a lecture, our memories work a little like a Wikipedia page: they can be edited after the event; memory is “reconstructive” in nature.
Can we even use our experience to assess and estimate the benefits and harms that interventions or tests have for patients?
As stated in the systematic review by Hoffmann et al.: “Clinicians rarely had accurate expectations of benefits or harms, with inaccuracies in both directions. However, clinicians more often underestimated rather than overestimated harms and overestimated rather than underestimated benefits. Inaccurate perceptions about the benefits and harms of interventions are likely to result in suboptimal clinical management choices.”
So the answer is no, we can’t.
To escape all these errors, and to make more informed choices, we need to look to experimental research and randomized controlled trials (RCTs) to determine, with any degree of certainty, the effects of a given intervention (Herbert et al. 2005). Modern pain rehabilitation should be informed by both qualitative and quantitative research, and draw on the large goldmine of research that currently exists. Even if an RCT does not exist for a particular condition in a specific population (such as obese people, children, or premenstrual women), there is still a goldmine of knowledge that can inform our clinical reasoning and improve our treatments.
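The sketch below (again with made-up numbers, not results from any actual trial) illustrates why the randomized comparison matters: a treated group and an untreated control group both improve over time, and only the difference between them estimates the effect of the intervention itself.

```python
# Illustrative sketch (made-up effect sizes, not results from any real trial):
# both groups improve over time, and only the *difference* between groups
# reflects the intervention itself.
import random

random.seed(3)

NATURAL_RECOVERY = 2.0        # average pain reduction everyone gets with time
TRUE_TREATMENT_EFFECT = 0.5   # the small extra reduction the treatment adds

def change_in_pain(treated):
    mean_change = NATURAL_RECOVERY + (TRUE_TREATMENT_EFFECT if treated else 0.0)
    return random.gauss(mean_change, 1.0)

treatment_group = [change_in_pain(True) for _ in range(200)]
control_group = [change_in_pain(False) for _ in range(200)]

mean_treated = sum(treatment_group) / len(treatment_group)
mean_control = sum(control_group) / len(control_group)

print(f"Before/after change, treated group:      {mean_treated:.1f} points")
print(f"Before/after change, control group:      {mean_control:.1f} points")
print(f"Estimated treatment effect (difference): {mean_treated - mean_control:.1f} points")
# Looking only at the treated group, a roughly 2.5-point improvement is easy
# to mistake for a large treatment effect; the control group shows that most
# of it would have happened anyway.
```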
The primary purpose of using science in healthcare is to increase the quality of care and to enable us to make more informed choices based upon current, valid models. More importantly still, it is to make sure we are not repeating the errors of the past.
As Prof. Jules Rothstein, PT, PhD, states: “We need to make certain that, as we move to a better form of practice, we continue to put patients first. Nothing could be more humanistic than using evidence to find the best possible approaches to care. We can have science and accountability while retaining all the humanistic principles and behaviors that are our legacy.”
Recommended further reading:
Herbert et al., Outcome measures measure outcomes, not effects of intervention; Clinical Reasoning in the Health Professions by Higgs and Jones; The Philosophy of Evidence-Based Medicine by Howick; In Evidence We Trust by Hale.
References:
Damer TE. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments. 6th ed. Wadsworth, Cengage Learning; 2009.
Herbert R, Jamtvedt G, Mead J, Hagen KB. Outcome measures measure outcomes, not effects of intervention. Aust J Physiother. 2005;51(1):3-4.
Higgs, J., & Jones, M. A. (2008). Clinical reasoning in the health professions, 3rd Edition. Oxford: Butterworth-Heinemann.
Hill RJ, Davies PS. The validity of self-reported energy intake as determined using the doubly labelled water technique. Br J Nutr. 2001 Apr;85(4):415-30.
Hoffmann T, Del Mar C. Clinicians’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. Published online January 9, 2017. doi:10.1001/jamainternmed.2016.8254.
Howick J. The Philosophy of Evidence-Based Medicine. Wiley-Blackwell, BMJ Books (2011).
Kerry R. The Philosophy of Evidence-Based Medicine. Man Ther. 2011;16(6):e7. doi:10.1016/j.math.2011.07.007.
Lacy JW, Stark CE. The neuroscience of memory: implications for the courtroom. Nat Rev Neurosci. 2013 Sep;14(9):649-58. doi:10.1038/nrn3563. Epub 2013 Aug 14.
Rothstein JM. Thirty-Second Mary McMillan Lecture: journeys beyond the horizon. Phys Ther. 2001 Nov;81(11):1817-29.
Schoeller DA, Thomas D, Archer E, Heymsfield SB, Blair SN, Goran MI, Hill JO, Atkinson RL, Corkey BE, Foreyt J, Dhurandhar NV, Kral JG, Hall KD, Hansen BC, Heitmann BL, Ravussin E, Allison DB. Self-report-based estimates of energy intake offer an inadequate basis for scientific conclusions. Am J Clin Nutr. 2013 Jun;97(6):1413-5.