The Bastardization of Evidence-Based Practice

Bastardization
Most, if not all, arguments against Evidence-Based Practice stem from misunderstandings and misrepresentations of what Evidence-Based Practice is. It has become popular to bastardize the Evidence-Based Practice (EBP) approach to healthcare while ignoring that, flaws and all, EBP is a giant leap forward compared with the alternative.

“Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.” (1)

“Evidence based medicine “is the process of systematically reviewing, appraising and using clinical research findings to aid the delivery of optimum clinical care to patients. For decades people have been aware of the gaps between research evidence and clinical practice, and the consequences in terms of expensive, ineffective, or even harmful decision making.” (2)


The 3 parts of Evidence-Based Practice

The three essential parts of Evidence-Based Practice (the best available external evidence, clinical expertise, and patient values) each have individual strengths and weaknesses; they should always be used as a unity. Relying on only one part negates the primary purpose of EBP and amplifies the deficiencies of that chosen part.

A prime weakness of the clinical expertise and patient values parts of Evidence-Based Practice is that they cannot be used to establish cause-and-effect relationships; this is basic epistemology. On their own, clinical experience and patient values are insufficient for detecting both small and large effects.

One weakness of scientific experiments, including randomized controlled trials (RCTs), and therefore of EBP, is that they typically report averages. The effects seen in experiments are average outcomes: there is a large degree of inter-individual variation, and in some cases even non-responders, people in an experiment who see no effect from a given intervention. Nevertheless, evidence from quality RCTs is unambiguously the best way to know whether an intervention produces consistent results (3).

“Despite these limitations, randomization remains essential for identifying the effect of a treatment on outcomes in nearly all circumstances.” (3)

There are errors within EBP that should be acknowledged and rectified. However, far more significant errors occur in practice supported only by clinical experience or patient values. The primary weakness of clinical experience is that it is essentially uncontrolled, non-systematic observation. As such, it carries a considerable risk of confirmation bias, confounding variables, and other cognitive errors. We therefore cannot draw valid or reliable conclusions, let alone demonstrate causation, from clinical experience alone.

“Clinicians tend to underestimate potential harm whilst overestimating the potential benefit of tests and treatments. These tendencies make it more likely for clinicians to overuse diagnostic tests and overtreat disease.” (4)

“Clinicians rarely had accurate expectations of benefits or harms, with inaccuracies in both directions. However, clinicians more often underestimated rather than overestimated harms and overestimated rather than underestimated benefits. Inaccurate perceptions about the benefits and harms of interventions are likely to result in suboptimal clinical management choices.” (5)

“To sum up, experience alone is usually an insufficient tool for detecting small and large effects.” (6)

Still, it is essential to recognize that research alone does not help people; the skillful clinical reasoning of a clinician does. Only clinicians in the field can critically analyze the research and implement what is relevant to the patient’s presentation, goals, and needs. Clinicians are therefore vital to delivering high-quality care informed by external evidence.


References:

1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996 Jan 13;312(7023):71-2. doi: 10.1136/bmj.312.7023.71.

2. Rosenberg W, Donald A. Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995;310:1122-1126. doi: 10.1136/bmj.310.6987.1122.

3. Fanaroff AC, Califf RM, Harrington RA, Granger CB, McMurray JJV, Patel MR, Bhatt DL, Windecker S, Hernandez AF, Gibson CM, Alexander JH, Lopes RD. Randomized Trials Versus Common Sense and Clinical Observation: JACC Review Topic of the Week. J Am Coll Cardiol. 2020 Aug 4;76(5):580-589. doi: 10.1016/j.jacc.2020.05.069.

4. Uy EJB. Key concepts in clinical epidemiology: Estimating pre-test probability. J Clin Epidemiol. 2022 Apr;144:198-202.

5. Hoffmann TC, Del Mar C. Clinicians’ Expectations of the Benefits and Harms of Treatments, Screening, and Tests: A Systematic Review. JAMA Intern Med. 2017 Mar 1;177(3):407-419. doi: 10.1001/jamainternmed.2016.8254.

6. Howick JH. The Philosophy of Evidence-Based Medicine. BMJ Books; 2011.