The case against the use of clinical experience (for finding causal relationships)

Dog Biting Its Tail

Now, let me preface the title by saying that clinical experience and personal experience have great value and can teach us a lot. One great use of our experiences is to form new testable hypotheses, which research then has to confirm before we integrate them into clinical practice.

But personal experiences, anecdotes, and clinical experience are not sound evidence; they are merely personal stories, or, as I like to call them, clinical “nursery rhymes”. Many people are totally oblivious to the fallibility of our subjective experiences and the reconstructive nature of our memory.



“The plural of ‘anecdote’ is not ‘data’.” (Professor Irwin S. Bernstein, Department of Psychology, University of Georgia)



Most laypeople and even some clinicians think that you can use experience to find causal relationships, to see whether an intervention “works”.

As stated by Dr. Neil O’Connell PhD: “You can’t tell if a treatment works just from clinical observation and experience”. 

Prof. Howick PhD also writes about this problem in his book The Philosophy of Evidence-Based Medicine: “To sum up, experience alone is usually an insufficient tool for detecting small and large effects.”

The primary weakness of personal experiences and anecdotes as evidence is that they are uncontrolled; technically, they are non-systematic and non-quantifiable observations. They carry a considerable risk of subconscious data mining, confounding variables, and memory effects, and they are subject to confirmation bias and multiple other cognitive biases.

Therefore, we cannot draw reliable conclusions or demonstrate causation from anecdotes or personal stories. Laypeople often rely upon anecdotes and testimonies rather than objective scientific research. Marketers, salespeople, and charlatans often rely heavily on this type of evidence because, essentially, they can make it say whatever they want it to say.

Anecdotes often lead to type I errors and erroneous post hoc reasoning (the post hoc ergo propter hoc fallacy), or to mistaking correlation for causation. The phrase “correlation does not imply causation” refers to the “inability to legitimately deduce a cause-and-effect relationship between two variables solely on the basis of an observed association or correlation between them”.
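
To make the correlation-causation trap concrete, here is a minimal sketch in Python with NumPy; the scenario and all numbers are invented purely for illustration. A hidden confounder, here labeled “severity”, drives both how inclined a patient is to try a treatment and how poor their outcome is. The treatment has zero causal effect, yet the observed correlation is substantial:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder: overall disease severity (illustrative values only).
severity = rng.normal(0.0, 1.0, n)

# Sicker patients are more inclined to try the treatment...
treatment_inclination = severity + rng.normal(0.0, 1.0, n)

# ...and sicker patients also have worse outcomes. Note that the treatment
# itself contributes NOTHING to the outcome in this model.
poor_outcome = severity + rng.normal(0.0, 1.0, n)

# Yet a naive observer sees a strong association between the two.
r = np.corrcoef(treatment_inclination, poor_outcome)[0, 1]
print(f"Observed correlation: {r:.2f}")  # about 0.5, despite zero causal effect
```

Uncontrolled clinical observation sees only the correlation; it takes a controlled design to separate the treatment’s effect from the confounder’s.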

We must remember that temporal priority (or chronological order) is only one of the indicators of a possible causal relationship. Other indicators might be a spatial connection or a history of regularity. But temporal priority alone is insufficient to establish a causal relationship, because if it were enough, any event that preceded another event could be believed to be in a causal relationship with it; clearly, this is not the case (Damer 2009).

Our brains have many flaws, including flaws in our memory, our perception, and our thinking. So I’m starting to think that we are not so reliable.

Some of these flaws are confirmation bias, heuristic thinking, the gambler’s fallacy, the availability heuristic, escalation of commitment, the effort heuristic, the fundamental attribution error, the anchoring effect, the “toupée” fallacy, attentional bias, congruence bias, the bandwagon effect, wishful thinking, the Forer effect (Barnum effect), choice-supportive bias, negativity bias, observation selection bias, the observer-expectancy effect, compartmentalization, inattentional blindness, change blindness, memory confabulation, memory pareidolia, subconscious data mining, and the base rate fallacy.

Not to mention the many statistical problems we can’t escape in clinical practice: regression to the mean, sample selection bias, the effects of multiple confounding variables, and the lack of a control group.
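
Regression to the mean alone can manufacture an apparently successful treatment. The toy simulation below (Python with NumPy; invented numbers, not real patient data) selects patients on a day when their pain is unusually high for them, exactly as happens when people seek care, and then measures them again later with no treatment at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each patient has a stable average pain level plus day-to-day fluctuation.
baseline = rng.normal(5.0, 1.0, n)              # per-patient average (0-10 scale)
intake = baseline + rng.normal(0.0, 2.0, n)     # pain on the day they seek care
followup = baseline + rng.normal(0.0, 2.0, n)   # pain later, with NO treatment

# People tend to seek care when their pain is unusually bad for them.
seeks_care = intake > 7.0

print(f"Mean pain at intake:    {intake[seeks_care].mean():.1f}")    # about 8.2
print(f"Mean pain at follow-up: {followup[seeks_care].mean():.1f}")  # about 5.6
# The "improvement" is pure regression to the mean; any treatment given
# in between would wrongly be credited with the change.
```

Without a control group selected under the same rule, clinical observation cannot distinguish this statistical artifact from a real treatment effect.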

Some of the other reasons why we can’t trust our own experience are summarized by Higgs & Jones in their book, “Clinical Reasoning in the Health Professions”: “No matter how much we may think we have an accurate sense of our practice, we are stymied by the fact that we are using our own interpretive filters to become aware of our own interpretive filters! This is the pedagogic equivalent of a dog trying to catch its own tail, or of trying to see the back of your head while looking in the bathroom mirror. To some extent we are all prisoners trapped within the perceptual frameworks that determine how we view our experiences. A self-confirming cycle often develops whereby our uncritically accepted assumptions shape clinical actions which then serve only to confirm the truth of those assumptions.”

One of the fundamental problems with experience lies in the dubious nature of our memory, as stated by Lacy and Stark (2013): “Findings from basic psychological research and neuroscience studies indicate that memory is a reconstructive process that is susceptible to distortion.”

This means that, to a large degree, we can’t trust what we remember. There are many flaws in our memory, and intuitively we all know this; that is why we use calendars, to-do lists, and shopping lists when we don’t want to forget anything.

Many clinicians sadly practice “experience”-based or belief-based care; they unfortunately have no idea of the current research. We need “science and evidence” more desperately than you need toilet paper when taking a shit in the woods! We need science to show us causal relationships, as stated before. We can’t do this in the clinical setting: there is simply way too much noise and risk of error for us to draw any reliable conclusions with even a minimum of certainty.

So this is why personal experiences, anecdotes, and clinical nursery rhymes are not valid as reliable evidence. There is too great a risk of subconscious data mining, or of one of the numerous endogeneity problems, like mistaking correlation for causation.

Science can be viewed as nothing more than our attempt to systematically and rigorously rectify and avoid the many errors and cognitive biases that plague our experiences and memory, while simultaneously trying to decrease the influence of confounding variables and statistical flaws, and using consistent logic to evaluate the results. Science is built on the idea of falsification, as a means of compensating for the confirmatory nature of our minds.

Professor Karl R. Popper defined the scientific method as “proposing bold hypotheses, and exposing them to the severest criticism, in order to detect where we have erred” (Popper 1974). Only if a specific hypothesis can withstand criticism to the best of our ability can we say that it may be somewhat valid.

As a start in learning a more research-based approach to care, and to help close the research-to-practice gap, I would recommend following (on social media) the expert clinicians and researchers mentioned here: Adam Meakins, Dr. Gregory Lehman (DC), Diane Jacobs, Ben Cormack, Prof. Peter O’Sullivan, Todd Hargrove, Dr. Jason Silvernail (DPT), Dr. Derek Griffin (PhD), Dr. Kjartan Vibe Fersum (PhD), Christopher Johnson, Dr. Jarod Hall (DPT), and Sigurd Mikkelsen.

References:

Damer TE. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments. 6th ed. Wadsworth, Cengage Learning; 2009.



Lacy JW, Stark CE. The neuroscience of memory: implications for the courtroom. Nat Rev Neurosci. 2013;14(9):649-58. doi:10.1038/nrn3563.



Popper KR. Replies to my critics. In: Schilpp PA, editor. The Philosophy of Karl Popper. 1974.