Unlearning – Yoga For Your Brain

The Gist:  Knowledge translation is a problem in medicine and, at the individual level, unlearning likely contributes to the knowledge translation gap.  It may also be part of the solution. Akin to yoga, unlearning requires flexibility, training or deliberate practice, and is enhanced by a community open to skepticism and growth.
  • Note: These musings are not evidence based but are more of a cognitive framework to understand why we have such difficulty individually changing practice.
For an introduction to unlearning, check out this post.  In brief, unlearning, while really an aspect of truly understanding and learning, is complicated.  Most of us find it relatively easy to stuff more information into our brains, a process we perceive as "learning."  Nasal cannula with oxygen at 15 liters per minute or more may help prolong safe apnea time and reduce hypoxia during intubation?  Cool, we can do that.  However, when we are told that something we routinely do or believe may not be necessary, or may even be harmful, we often have more difficulty changing our behavior, or "unlearning."  Intubating patients with out-of-hospital cardiac arrest may not be helpful?  Preposterous!  Unlearning a bit of information is unnatural and may feel awkward, whereas learning often does not put our knowledge or ego in jeopardy.

Flexibility - Unlearning requires cognitive flexibility. When we stretch our bodies, we reach a point where we feel a burn, our body telling us we are approaching our limit. We can push further safely; it just burns slightly more.  Unlearning is essentially the same and is often accompanied by “the burn.”  When we come across information counter to the way we practice, it stings, and we may feel defensive. This is “the burn.”  We experience discomfort when we stretch our thinking beyond this point, perhaps in part because we are emotionally tied to our knowledge. We work hard for what we know.  We act quickly in emergency care and must have confidence in what we know, as emergent situations do not typically allow for debate or significant time to think.

  • A fix: When we feel “the burn” upon confronting new information that runs counter to our practice, recognize that this is a warning sign that knowledge may be changing.  Recognize that the sting comes from our ego protecting what we know.  This does not mean that we should change practice whenever we come across a piece of novel information. Rather, we should be aware that in order to practice evidence-based, up-to-date medicine, we may feel discomfort. When our beliefs are challenged, instead of becoming defensive, we should thoughtfully consider the information.
Training - Mastering a yoga pose requires training and deliberate practice. In order to unlearn ways of thinking, we must also engage in mental preparation and practice.  It is easy, particularly in emergency settings when adrenaline dominates, to think and execute in a perfunctory manner. We default to what we know and what is familiar.
Some fixes:

  • Early Exposure - The earlier we begin training, the more prepared we are. If we have an upcoming race, we can expect to perform better if we begin preparing early rather than the week before the race. Similarly, when it comes to unlearning a habit or a way of thinking, the sooner we are exposed to the contrary argument, the more prepared we may be to unlearn.  This may serve as preconditioning, so that we react less strongly upon repeat exposure.
  • Repeat Exposure - Practice is central to most athletic endeavors. The more repetitions we do, the stronger we become.  The more we practice a yoga pose, the more likely we are to be successful and the more comfortable it will feel.  Unlearning is easier when we are exposed to the target bit of knowledge more frequently.  Spaced repetition is one of the most evidence-based means of learning, and this probably applies to unlearning as well.
Community - Yoga and CrossFit are associated with strong communities, as are many team sports.  Communities may motivate us, hold us accountable, and push the bounds of our perceived capability.  Studies demonstrate that physicians practice similarly to the institution where they trained and show wide geographic variation in practice patterns. A network of peers and colleagues, particularly outside one's own main “system” or hospital, may increase our cognitive flexibility by exposing us to a wide array of practice patterns.  The Free Open Access Medical Education (FOAM) community may expose us to novel and controversial information.

Unlearning in the Prehospital Arena: The Workout
Needle Decompression for Tension Pneumothorax (see this post or this podcast).  The second intercostal space at the midclavicular line (2nd ICS MCL) has been taught as the ideal spot for needle decompression.  This, however, is changing.  New recommendations are to use a catheter at least 8cm in length if needle decompression is attempted at the 2nd ICS MCL or decompress at the fourth or fifth intercostal space at the anterior axillary line (4/5th ICS AAL).
The chest wall is thick at the 2nd ICS MCL [1,2].  Radiographic studies of chest wall thickness demonstrate increased thickness at the 2nd ICS MCL compared with the 4/5th ICS AAL (4.78 cm vs 3.42 cm).  Even ATLS states that needle decompression in the 2nd ICS MCL will fail more than 50 percent of the time.  This is an intervention undertaken in extreme circumstances in critically ill patients.  A chance of failure of 1 in 2 is unacceptable.
The 2nd ICS MCL is difficult to identify [3,4].  The clavicle extends further than most people think. As a result, providers are less accurate in identifying the 2nd ICS MCL compared with the 4/5th ICS AAL.

The pulse check.  If one were to survey cardiac arrest resuscitation across the United States, in and out of hospital, we would probably see that the majority of teams pause every two minutes for a “pulse check,” despite decreased emphasis on the pulse check in the AHA guidelines over the past 10 years.  The guidelines recommend minimal interruptions for pulse checks and detail the problematic sensitivity and specificity of pulse identification [6].  After the initial pulse check prior to CPR, the guidelines don’t actually specify any time frame for repeat pulse checks, yet many of us pause anyway.  Sure, we can pause for rhythm analysis; however, many systems, and the European guidelines, now recommend pulse assessment upon observation of an organized rhythm or an increase in end-tidal capnography [7].
Few people can determine the presence of pulselessness in 10 seconds. Dick et al studied patients placed on cardiopulmonary bypass, with providers blinded to whether or not the patient actually had a pulse. Only 2% of this cohort of experienced providers were able to identify a pulseless patient within 10 seconds [8]. Given the increased emphasis on compression fraction, prolonged pulse checks may delay the resumption of compressions.
The accuracy of pulse identification by providers is suboptimal, noted to be 78% in one study [9].  While an accuracy of 78% may seem high, it means that roughly one in five times we will be wrong. We may feel the reverberation of our own pulse, and the truly pulseless patient may suffer an unnecessary and perhaps deleterious delay in chest compressions.
For more on this topic check out this post and/or this post.

Left Bundle Branch Block (LBBB) as a STEMI Equivalent (check out this post)- Prior to the 2013 iteration of the AHA guidelines for ST-elevation myocardial infarction (STEMI), new or presumed new LBBB existed as a “STEMI equivalent.”  This often activated the cath lab and STEMI teams.  In 2013, the AHA removed this from the guidelines yet these patients are often referred to the emergency department for “rule out MI.”
Further, STEMI may often be diagnosed on ECG, using the Sgarbossa or modified Sgarbossa criteria (link) [10].

Backboards - Fortunately, protocols in many states and systems have dispensed with long backboards.  Long thought to be protective despite known harms, long backboards were the subject of a 2016 American College of Emergency Physicians guideline explicitly stating that they should not be used as a therapeutic or precautionary measure. They cause harm and don’t help [11].

Oxygen in Acute Coronary Syndromes - Aspirin, oxygen, and nitroglycerin have long been the initial interventions for patients with suspected ACS. Recent studies have found no clear benefit for oxygen in patients with normal oxygen saturations. Further, one study found oxygen was associated with markers of larger myocardial infarctions (although this is not a patient-oriented outcome) [12]. The AHA states oxygen is appropriate for patients who are hypoxemic (oxygen saturation < 90%) [6].

1. Laan DV, Vu TDN, Thiels CA, et al. Chest wall thickness and decompression failure: A systematic review and meta-analysis comparing anatomic locations in needle thoracostomy. Injury. 2015:14–16.
2. Advanced Trauma Life Support, 9th edition.
3. Ferrie EP et al. The right place in the right space? Awareness of site for needle thoracocentesis. Emerg Med J. 2005;22(11):788–9.
4. Inaba K et al. Cadaveric comparison of the optimal site for needle decompression of tension pneumothorax by prehospital care providers.
6. Berg RA, Hemphill R, Abella BS, et al. Part 5: Adult Basic Life Support: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2010;122(18 suppl 3):S685-S705.
7. Resuscitation. 2010;81(6):671-5.
8. Dick WF, Eberle B, Wisser G, Schneider T. The carotid pulse check revisited: what if there is no pulse? Crit Care Med. 2000;28(11 Suppl):N183-5.
9. Tibballs J, Weeranatna C. The influence of time on the accuracy of healthcare personnel to diagnose paediatric cardiac arrest by pulse palpation.
10. O'Gara PT, Kushner FG, Ascheim DD, et al. 2013 ACCF/AHA Guideline for the Management of ST-Elevation Myocardial Infarction. J Am Coll Cardiol. 2013;61(4):e78-e140.
11. "EMS Management of Patients with Potential Spinal Injury." ACEP Board of Directors. Available at: https://www.acep.org/clinical---practice-management/ems-management-of-patients-with-potential-spinal-injury/
12. Stub D et al. Air Versus Oxygen in ST-Segment-Elevation Myocardial Infarction. Circulation. 2015;131(24):2143-2150.

Risky Business: Framing Statistics

The Gist: Medical literature frequently frames effect size using relative and absolute risks in ways that intentionally alter the appearance of an intervention, an example of framing bias. Effect sizes seem larger using relative risk and smaller using absolute risk [1,2].

Part of our 15 Minutes - 'Stats are the Dark Force?' residency lecture series
Risk from Lauren Westafer on Vimeo.

Perneger and colleagues surveyed physicians and patients about the efficacy of 4 new drugs. They were presented with the following scenarios:
While these numbers reflect the same data, both patients and physicians rated the drug that "reduced mortality by 1/3" as "clearly better" [3].  This indicates both parties are susceptible to the cognitive tricks of statistics, particularly the way relative risk makes numbers appear larger and absolute risk makes them appear smaller.  Authors can switch between relative and absolute risk to maximize the appearance of benefit and minimize that of risk, as seen in the following abstract:

Perhaps authors should be encouraged to report statistics consistently, using relative OR absolute risk rather than switching between the two to give the appearance of maximum benefit and minimal risk.
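The arithmetic behind these framings can be sketched in a few lines. The trial numbers below are hypothetical, chosen only to show how the same data yield a large-sounding relative risk reduction and a small-sounding absolute one:

```python
def risk_framings(control_deaths, control_n, treated_deaths, treated_n):
    """Return the common framings of a single hypothetical trial result."""
    cer = control_deaths / control_n   # control event rate
    eer = treated_deaths / treated_n   # experimental event rate
    arr = cer - eer                    # absolute risk reduction
    rrr = arr / cer                    # relative risk reduction
    nnt = 1 / arr                      # number needed to treat
    return cer, eer, arr, rrr, nnt

# Hypothetical trial: mortality falls from 3% (30/1000) to 2% (20/1000).
cer, eer, arr, rrr, nnt = risk_framings(30, 1000, 20, 1000)
print(f"Relative risk reduction: {rrr:.0%}")  # "reduced mortality by 1/3"
print(f"Absolute risk reduction: {arr:.1%}")  # "1 percentage point"
print(f"Number needed to treat:  {nnt:.0f}")  # 100 treated per death averted
```

"Reduced mortality by a third" and "one death averted per 100 patients treated" describe exactly the same trial; only the framing differs.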

1. Barratt A. Tips for learners of evidence-based medicine: 1. Relative risk reduction, absolute risk reduction and number needed to treat. CMAJ. 2004;171(4):353-358.
2. Malenka DJ, Baron JA, Johansen S, Wahrenberger JW, Ross JM. The framing effect of relative and absolute risk. J Gen Intern Med. 1993;8(10):543-8.
3. Perneger TV, Agoritsas T. Doctors and patients' susceptibility to framing bias: a randomized trial. J Gen Intern Med. 2011;26(12):1411-7.

P Values: Everything You Know Is Wrong

The Gist:  P values are probably the most “understood” statistic amongst clinicians yet are widely misunderstood.  P values should not be used alone to accept or reject something as “truth” but they may be thought of as representing the strength of evidence against the null hypothesis [1].

Part of our 15 Minutes - 'Stats are the Dark Force?' residency lecture series

P: Everything You Know Is Wrong from Lauren Westafer on Vimeo.

At various times in my life I, like many others, have believed the p value to represent one of the following (none of which are true):
    • Significance
      • Problem:  Significance is a loaded term.  A value of 0.05 has become synonymous with “statistical significance.”  Yet, this value is not magical and was chosen predominantly for convenience [3].  Further, the term “significant” may be confused with clinical importance, something a statistic cannot answer.
    • The probability that the null hypothesis is true.
      • Problem: The calculation of the p value includes the assumption that the null hypothesis is true.  Thus, a calculation that assumes the null hypothesis is true cannot, in fact, tell you that the null hypothesis is false.
    • The probability of getting a Type I Error 
      • Background: Type I Error is the incorrect rejection of a true null hypothesis (i.e. a false positive), and the probability of a Type I Error is represented by alpha. Alpha is often set at 0.05 so that, if the null hypothesis is true, there is a 5% chance of incorrectly rejecting it. This is a PRE test calculation (set before the experiment).
      • Problem: Again, the calculation of the p value assumes the null hypothesis is true. The p value only tells us the probability of obtaining data as extreme as (or more extreme than) ours under that assumption; it does NOT speak to the underlying truth of whatever is being tested (i.e. efficacy). The p value is also a POST test calculation.
      • The error rate associated with various p values varies, depending on the assumptions in the calculations, particularly prevalence. However, it's interesting to look at some of the estimates of false positive error rates often associated with various p values:  p=0.05 - false positive error rate of 23-50%; p=0.01 - false positive error rate of 7-15% [5].
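Estimates in the ballpark of those false positive rates can be reproduced with the calibration from Sellke, Bayarri, and Berger [5]. The sketch below assumes 50:50 prior odds on the null hypothesis; other assumptions give the wider ranges quoted above:

```python
import math

def min_false_positive_rate(p):
    """Sellke-Bayarri-Berger lower bound on the false positive rate
    implied by a p value, assuming 50:50 prior odds on the null
    hypothesis (the bound -e*p*ln(p) is valid for p < 1/e)."""
    min_bayes_factor = -math.e * p * math.log(p)
    return min_bayes_factor / (1 + min_bayes_factor)

for p in (0.05, 0.01):
    print(f"p = {p}: false positive rate of at least "
          f"{min_false_positive_rate(p):.0%}")
# p = 0.05 -> at least ~29%; p = 0.01 -> at least ~11%
```

Both values land inside the ranges quoted above, though the exact error rate depends on the prior assumptions, particularly prevalence.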
P value is the probability of getting results as extreme or more extreme than those observed, assuming the null hypothesis is true. Originally, this statistic was intended to serve as a gauge for researchers to decide whether or not a study was worth investigating further [3].
  • High P value - data are likely with a true null hypothesis [Weak evidence against the null hypothesis]
  • Low P value - data are UNlikely with a true null hypothesis [Stronger evidence against the null hypothesis]
Example:  A group is interested in evaluating needle decompression of tension pneumothorax and proposes the following:
    • Hypothesis - Longer angiocatheters are more effective than shorter catheters in decompression of tension pneumothorax.
    • Null hypothesis - There is no difference in effective decompression of tension pneumothorax using longer or shorter angiocatheters.
Aho and colleagues performed this study and found a p value of 0.01 comparing 8 cm catheters with 5 cm catheters.  How do we interpret this p value?
  • We would expect the same number of effective decompressions or more in 1% of cases due to random sampling error, if the null hypothesis were true.
  • The data are UNLIKELY with a true null hypothesis and this is decent strength evidence against the null hypothesis.
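A small simulation makes this interpretation concrete. The success counts below are hypothetical, not the actual results from Aho and colleagues; the sketch simply shows how a permutation test estimates "the probability of results as extreme or more extreme, assuming the null hypothesis is true":

```python
import random

# Hypothetical decompression outcomes (1 = success), for illustration only.
long_cath  = [1] * 14 + [0] * 1   # 14/15 successes with 8 cm catheters
short_cath = [1] * 8  + [0] * 7   #  8/15 successes with 5 cm catheters

observed_diff = sum(long_cath) / 15 - sum(short_cath) / 15

# Under the null hypothesis, catheter length is irrelevant, so any
# relabeling of the pooled outcomes is equally likely.
random.seed(0)
pooled = long_cath + short_cath
trials, extreme = 20000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:15]) / 15 - sum(pooled[15:]) / 15
    if diff >= observed_diff:      # results as or more extreme
        extreme += 1

p_value = extreme / trials
print(f"Estimated one-sided p value: {p_value:.3f}")
```

A small p value here means only that the observed imbalance would rarely arise from relabeling alone, i.e. the data are unlikely under a true null hypothesis.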
Limitations of the Letter “P”
    • Reliability.  P values depend on the statistical power of a study. A small study with little statistical power may have a p value greater than 0.05 and a large study may reveal that a trivial effect has statistical significance [2,4].  Thus, even if we are testing the same question, the p value may be "significant" or "nonsignificant" depending on the sample size.
    • P-hacking.  Definition:  "Exploiting –perhaps unconsciously - researcher degrees of freedom until p<.05" Alternatively: "Manipulation of statistics such that the desired outcome assumes "statistical significance", usually for the benefit of the study's sponsors" [7].
      • A recent study of abstracts published between 1990 and 2015 showed 96% contained at least one p value < 0.05 [6]. Are we that wildly successful in research? Or are statistically nonsignificant results published less frequently (probably)? Or do we try to find something in the data to report as significant, i.e. p-hack (likely)?
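A quick simulation shows why p-hacking works. The sketch below tests 20 outcomes that are pure noise and reports the smallest p value; with 20 looks, the chance of at least one p < 0.05 is about 64% (1 - 0.95^20) even though no real effect exists:

```python
import math
import random

def two_sided_p(a, b):
    """Two-sided z-test on means; both samples here are standard normal
    noise, so the SE of the difference of means is sqrt(2/n)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(42)
p_values = []
for outcome in range(20):          # 20 noise "outcomes", no real effect
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    p_values.append(two_sided_p(a, b))

print(f"Smallest of 20 null p values: {min(p_values):.3f}")
```

Reporting only that smallest p value, without disclosing the other 19 comparisons, is p-hacking.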
P values are neither good nor bad. They serve a role that we have distorted and, according to the American Statistical Association: The widespread use of “statistical significance” (generally interpreted as “p ≤ 0.05”) as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process [1].  In sum, acknowledge what the p value is and is not and, by all means, do not p alone.

  1. Wasserstein RL, Lazar NA. The ASA’s statement on p-values: context, process, and purpose. Am Stat. 2016;1305(April):00–00. doi:10.1080/00031305.2016.1154108.
  2. Goodman S. (2008) A dirty dozen: twelve p-value misconceptions. Seminars in hematology, 45(3), 135-40. PMID: 18582619 
  3. Fisher RA. Statistical Methods for Research Workers. Edinburgh, United Kingdom: Oliver & Boyd; 1925.
  4. Sainani KL. Putting P values in perspective. PM R. 2009;1(9):873–7. doi:10.1016/j.pmrj.2009.07.003.
  5. Sellke T, Bayarri MJ, Berger JO.  Calibration of p Values for Testing Precise Null Hypotheses.  The American Statistician, February 2001, Vol. 55, No. 1
  6. Chavalarias D, Wallach JD, Li AHT, Ioannidis JPA. Evolution of Reporting P Values in the Biomedical Literature, 1990-2015. Jama. 2016;315(11):1141. doi:10.1001/jama.2016.1952.
  7. PProf.  "P-Hacking."  Urban Dictionary. Accessed May 1, 2016.  Available at: http://www.urbandictionary.com/define.php?term=p-hacking