Stress Inoculation Training

One of the hottest buzzphrases in Emergency Medicine and Critical Care Education is Stress Inoculation Training (SIT).

For this podcast, Swami had the opportunity to sit down and chat with Michael Lauria. Mike is a first-year medical student at the Geisel School of Medicine at Dartmouth, but he has extensive experience with SIT from his time as a Pararescueman in the US Air Force. Mike’s prehospital and retrieval experiences translate well to resident education. To get some background on where Mike is coming from, check out his lecture “Making the Call” on YouTube as well as his recent guest appearance on EMCrit discussing toughness.

Here we discuss the origins of SIT, its use in the military and how we can bring SIT to the world of medical education. We also touch on the strengths and weaknesses of SIT. Mike discusses some concepts that are critical to implementing SIT into resident training. So, take a listen, see what you think and post some comments and critiques. We, of course, would love to hear your thoughts and opinions.

References

Meichenbaum D. Stress inoculation training: a preventative and treatment approach. Principles and Practice of Stress Management 3rd Edition. Guilford Press 2007.

Saunders T et al. The effect of stress inoculation training on anxiety and performance. J Occup Health Psychol 1996; 1(2): 170-86. PMID: 9547044

LeBlanc VR. The effects of acute stress on performance: implications for health professions education. Acad Med 2009; 84(10 Suppl): S25-33. PMID: 19907380

Recommended Reading

Grossman D, Christensen LW. On Combat: The Psychology and Physiology of Deadly Conflict in War and in Peace. 2008. Link

EMCrit Podcast 118 – EMCrit Book Club – On Combat by Dave Grossman.

Asken M, Christensen LW, Grossman D. Warrior Mindset. 2010. Link

Siddle BK. Sharpening the Warrior’s Edge: The Psychology & Science of Training. 1995. Link

 

Guest

Michael J. Lauria

MS1, Dartmouth Geisel School of Medicine

Critical Care/Flight Paramedic

Dartmouth-Hitchcock Advanced Response Team (DHART)


The post Stress Inoculation Training appeared first on iTeachEM.

How health professions educators should use social media

Michelle Lin, creator of the amazing ALIEM blog, recently spoke at the inaugural 2014 SoMeSummit in Canada as part of the International Conference on Residency Education (ICRE). This talk is a strong contender for the definitive explanation of ‘how health professions educators should use social media’.

Watch it now:


Teaching Risk Taking Behavior in Medical Education


This post was put together by the incredibly talented and brilliant Swami.

 

 

 

A 44-year-old healthy man presents with dull chest pain for 3 hours. His EKG is unremarkable. What’s his risk for acute coronary syndrome? Should he get a troponin? Two troponins? Observation and a stress test?

The Emergency Department is an inherently high-risk zone.

Emergency Medicine is an inherently risky specialty. In fact, many would say that risk stratification is our specialty. When a patient presents with symptoms, we use our clinical knowledge to determine the most likely cause of those symptoms. We then apply studies and investigations to help confirm that diagnosis while attempting to “rule out” other diagnoses. At the end of this process, we are often left without a specific diagnosis and still need to make a disposition. When we decide to admit or discharge a patient, or to have them follow up in 24 hours or in 1 week, we are risk stratifying. For those we send home without a diagnosis, we try to determine how long they can safely wait to see another doctor for further investigation. We know that some of these patients will decompensate and return to the ED, so we are risk stratifying the likelihood of that decompensation. Thus, during each patient encounter, the Emergency Physician must perform multiple risk stratifications. For example:

A 41-year-old man on aspirin presents with minor head trauma. His GCS is 15 and he is neurologically intact. He complains of a mild headache.

  • Does the patient need imaging now?
  • Does the patient need observation for 2 hours? 4 hours? Overnight?
  • Can I send the patient home safely without imaging?
  • Will the patient’s status degrade in the next 24 hours? 48 hours?
  • Should I schedule neurology follow up? If so, when?

This is a fairly simple case, yet multiple risk assessments are involved. Each of these decisions must take into account hospital factors (e.g. the ability to obtain follow up) and patient factors (e.g. distance from the hospital, reliability of follow up). This brings us to the central questions of this post:

  • How do I train residents about risk?
  • How do I train residents to develop their risk threshold?
  • How do I train residents to embrace risk?

“Damn it Jim, I’m a doctor not a CT scanner!”

Clearly, we can see the need for this type of training. While we’d all like to have a magic tricorder to tell us whether the patient has an intracranial injury, a concussion, etc., we don’t. We deal with tests that are less than perfect and make decisions based on those tests. This raises the first point I always discuss with my residents: there is no such thing as a “rule-out” test. No test or series of tests can definitively “rule out” a disease. We use tests to risk stratify the patient. Take another case:

A 22-year-old woman presents with right lower quadrant pain and vomiting. She is tender and you order a CT scan of the abdomen and pelvis. The scan is read as “no intra-abdominal pathology is identified that explains the patient’s pain.” Has the patient been “ruled out” for appendicitis?

We know the answer to this question is no. CT of the abdomen and pelvis has a sensitivity of 98-99% for appendicitis, so there will be patients with false-negative scans. In spite of knowing this, we usually tell patients, “You don’t have appendicitis. You’re going to be fine and we’ll be discharging you in a bit.” What we should be saying is, “The CT scan doesn’t show signs of appendicitis. I think it’s unlikely you have appendicitis, but the test isn’t perfect and there’s still a small chance. We’re going to send you home, but here’s what you need to watch out for.” The second statement acknowledges that we have risk stratified the patient into a low-risk category, not a no-risk one. This approach applies to any patient we see, whether the complaint is chest pain, an ankle injury or abdominal pain. Although this may appear to be nothing more than semantics, I would argue that this change in terminology is central to teaching what Emergency Medicine is about.
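That residual risk can even be made concrete with Bayes’ rule. Here is a minimal Python sketch; the 30% pretest probability and 95% specificity are illustrative assumptions (only the ~98% sensitivity figure comes from the text above):

```python
# Post-test probability of disease after a NEGATIVE test, via Bayes' rule.
def negative_post_test_probability(pretest, sensitivity, specificity):
    """P(disease | negative test result)."""
    p_neg_given_disease = 1 - sensitivity   # false-negative rate
    p_neg_given_healthy = specificity       # true-negative rate
    p_neg = pretest * p_neg_given_disease + (1 - pretest) * p_neg_given_healthy
    return pretest * p_neg_given_disease / p_neg

# Illustrative numbers only: an assumed 30% pretest probability of
# appendicitis, the ~98% CT sensitivity quoted above, and an assumed
# 95% specificity.
residual_risk = negative_post_test_probability(0.30, 0.98, 0.95)
print(f"Residual risk after a negative CT: {residual_risk:.1%}")
```

Even with these generous assumptions the residual risk is not zero, which is exactly the point of the “low risk, not no risk” discharge conversation.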

Once we’ve rid the residents of the idea of ruling out disease, we need to encourage them to think about risk stratification when they present the patient and incorporate that into their presentation. Residents are smart and once they get to know the faculty, they tailor the presentation and their proposed workup to what they think the faculty member will want to hear. We should encourage them, instead, to present the patient and tell us what they would do if they were in charge. This allows them to begin to feel the responsibility of their plans. Unfortunately, this takes time. After they give you their plan, you need to explain why you would do things differently. Why is your plan more or less risky than that of the residents? Explaining this will allow them to develop their risk taking behavior.

The most important part of risk stratification and risk decisions is the patient. When I finish a patient encounter and am ready to discharge the patient home, I always sit down and have a discussion about risks and the need for follow-up; the two go hand in hand. Often, patients believe that discharge from the Emergency Department comes with a clean bill of health and a 5-year, 100,000-mile guarantee. This again reflects the disconnect between what we as physicians think and what patients perceive.

When discussing disposition with the patient, sit down and turn off the phone.

How do we teach residents to communicate risk to patients? First, we should start by modeling the behavior. Have the resident follow you while you have this discussion with the patient. Here are some simple things to do to maximize the interaction and model the proper behavior:

  • Sit down and turn off the pager/phone (no interruptions)
  • Explain everything that’s happened during the ED stay
  • Explain the findings (or lack thereof) from your evaluation
  • Discuss your evaluation of all of the information and the presence of clinical uncertainty and the importance of prompt follow up
  • Discuss how the follow up will be arranged (patient calls or you are calling for the appointment)
  • Ask if the patient has any questions

I also like to add, “I’m okay with being wrong but I want you to give me the opportunity to make it right. Come back and see me or one of my colleagues if anything concerns you.”

There is no act of teaching that benefits the resident more than watching the proper behavior modeled. For the next patient, flip the scenario: the resident leads the discussion and you watch. Afterwards, offer critique and tips to improve.

“No I’ve never sent home a patient to die but that’s because I’m a good doctor.”

Finally, I think there is an important role for discussing difficult cases where your risk stratification was incorrect. I’m not talking about formal departmental Morbidity and Mortality conference, but rather conversations in the clinical environment about these cases. This practice stresses the importance of following patients up in order to evaluate the appropriateness of your level of risk taking behavior. Residents should understand that our understanding and practice of medicine is not perfect and that mistakes will be made. The vital thing is to learn from these mistakes and adapt our clinical care and risk taking behavior accordingly.

Risk stratification and risk taking behavior are central aspects of Emergency Medicine. It is our job as resident educators to help residents develop these skills and attitudes. Since I’m by no means an expert in this area, I encourage you to email me or post comments on the topic so we can all learn more.


The Importance of GRIT

Grit, or grittiness, is a concept in education that emerged about 5-10 years ago from an academic research group led by Dr. Angela Duckworth. Check out her TED talk. Duckworth’s target research audience tends to be primary and secondary education; however, I think this definitely applies to medical school, residency, and other types of high-level education.

Definition. Grit is not only a measure of an individual’s ability to overcome adversity or failure (resilience) but also includes a person’s commitment to their passions and interests over the long term. People with grit not only work hard to succeed in whatever they do but also have an ability to remain focused and committed to mastering a specific task or skill set. It is one of many non-cognitive skills that educators once thought were inherent to the student and so left out of curricula and formal education. We now know it is possible to teach these “intangible” skills.

How to measure grit. The Duckworth group has created a 12-item scale that has been used to predict success. I challenge each of you to take the 12-question test to see how “gritty” you actually are!

In a number of studies, they have found that grit is a better predictor of success than IQ, standardized tests, or any other type of exam we traditionally use to rate students.
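As a concrete illustration, the scoring of a Likert-style grit scale can be sketched in a few lines. This is only an illustration: the 1-5 rating and mean-score convention follow the published Grit Scale format, but the reverse-scored item positions below are assumed, not taken from Duckworth’s actual questionnaire.

```python
# Sketch of scoring a 12-item Likert grit scale: each item is rated 1-5,
# reverse-worded items are flipped (1 becomes 5, etc.), and the grit
# score is the mean, so it ranges from 1 (low grit) to 5 (high grit).
REVERSE_ITEMS = {1, 3, 5, 7, 9, 11}  # assumed positions of reverse-worded items

def grit_score(responses):
    assert len(responses) == 12 and all(1 <= r <= 5 for r in responses)
    adjusted = [6 - r if i in REVERSE_ITEMS else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

print(grit_score([4] * 12))
```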

Failure is a good thing. Interestingly, I’m sure we’ve all heard the phrase, “I’d take a B+ overachiever over an A student any day!” The concept of grit may come into play here. Duckworth believes that in some cases talent and grit can be inversely related. Why? Well, failure builds grit. Failure provides the learner an opportunity to regroup, re-strategize, and find an alternative path to success. Those who are very talented may rarely experience failure and, as a result, may not have the same drive to try over and over again until they eventually succeed.

Carol Dweck is a leader in the concept of the “growth mindset” which is also commonly found in those with grit.

 

Fixed vs Growth Mindset

 

Grit matters in medicine. Grit is a critical non-cognitive skill for physicians, especially those in emergency medicine, critical care, and other high-stress specialties. Resuscitations, and even careers for that matter, don’t always go according to plan, but we need to teach our students how to learn from their failures so they can succeed and adapt the next time they face a similar challenge. It’s the teacher’s responsibility to foster and encourage grit. You are teaching a skill that will help shape a lifelong career.

So, is it possible to teach grit?  Some wonderful suggestions on how to teach grit can be found below:

True Grit: The Best Measure of Success and How to Teach It

Stay tuned as we go through grit in more detail over the next few posts. We are going to break down the 2013 white paper on grit, Promoting Grit, Tenacity, and Perseverance: Critical Factors for Success in the 21st Century, and try to apply it to some critical issues in both medical education and training.

 

Suggested Reading

  • Duckworth AL, Peterson C, Matthews MD, Kelly DR. Grit: perseverance and passion for long-term goals. J Pers Soc Psychol. 2007;92(6):1087-101. [Full Text Link]
  • Duckworth Research Group
  •  Shechtman N, et al. Promoting Grit, Tenacity, and Perseverance: Critical Factors for Success in the 21st Century. US Department of Education. 2013. [Full Text Link]


Adopting FOAM: When and How?

In this post, Rob Cooney, the man, the myth, the legend, discusses FOAM and the adoption of FOAM into practice. Recently, the FOAM world had a front-row seat at a great debate between two titans as they went head-to-head over the issue of adopting a change in practice.

 

Part 1: SMACC Gold: What to Believe-When to Change

Part 2: SMACC Back-On Beliefs of Early Adopters and Straw Men

Part 3: SMACC Back-Back on What to Believe and When to Change   

While I enjoyed the debate and found myself nodding in agreement with each of the speakers, in the end I was left feeling a little wanting: there was very little practical advice on how to adopt a change. In fairness to the speakers, this was a high-level construct, a 50,000-foot view of the issue, designed to inspire you and make you think. That said, Scott Weingart did give a glimpse of a solution:

“You need to put in the time. You need to read. You need to understand how to critically appraise new evidence; how to integrate it into your existing belief structure; how to then test that based on bedside clinical experience; based on your understanding of physiology, based on the specifics of every individual patient.”

He is absolutely correct. My only concern is that his call for a rigorous method will scare away early adopters and push them back into the early majority. I’m pretty sure that wasn’t his intent, but just in case, I wanted to offer a more “boots on the ground” approach that I think many of us could use to determine when to implement a change in practice.

The methodology that I believe will allow everyday clinicians to adopt changes in their practice is called the Model for Improvement. This year I have the good fortune of being an IHI/AIAMC fellow, which means I am learning, living, and breathing quality improvement. The Institute for Healthcare Improvement (IHI) is a remarkable organization with quite a track record of implementing positive changes in healthcare, and the model it uses is the Model for Improvement. So what does this model look like?

 

[Figure 1: The Model for Improvement]

 

As you can see, the model is based on three critical questions followed by iterative cycles of testing and learning. Let’s break it down piece by piece.

“All improvement requires a change, but not all change is an improvement.”

As you can see from the quote above, if we want to get better at something, we have to make a change. Unfortunately, we can sometimes change things for the worse. Choosing what to change can be difficult, which is why the fundamental questions are critical.

Question 1: What are we trying to accomplish?

The first question can be viewed as the “aim statement,” and it must be answered very specifically. In quality improvement work, we attempt to identify the system, a timeline, and goals.

For example, “Within the next 12 months, we will reduce the door to doctor time in the emergency department from an average of 45 minutes to an average of 20 minutes.”

Practitioners working on individual improvements can choose much more manageable chunks to work with. Consider the use of push-dose pressors: perhaps you have had difficulty with post-intubation hypotension and are considering adding them to your practice. Your aim could simply be, “I want to reduce the incidence of post-intubation hypotension to less than 10%.” The key is to be as specific as possible. As they say at the IHI:

“Hope is not a plan, some is not a number, soon is not a time.”

Question 2: How will we know that a change is an improvement?

 “In God we trust. All others bring data.”

                                                                                                            -W.E. Deming

This may seem obvious, but in order to determine whether a change is an improvement, we have to measure something! In the push-dose pressor example above, you would have to measure your rate of post-intubation hypotension. If the addition of push-dose pressors did little to decrease that rate, you’d likely abandon the practice before fully implementing it. While this seems intuitive, the measurement process can be made more robust by considering three types of measures:

Outcome measures: These look at the performance of the system under study and are derived from the aim (e.g. rate of hypotension after intubation).

Process measures: These look at how often an activity is carried out (e.g. use of meds and fluids, before-and-after vital signs).

Balancing measures: Improving one part of a system at the expense of another should be mitigated as much as possible, so balancing measures look at the performance of the overall system. For example, how long does it take to intubate a patient with the addition of new drugs? Is there more hypoxia after the change?

It is also important to note that there are three kinds of measurement: for research, for judgment/accountability, and for improvement. For improvement, we are not looking for rigorous, randomized, double-blind, placebo-controlled trial data. We simply want “just enough” data to determine whether the change we are implementing is leading to an improvement.
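To make “just enough” measurement concrete, here is a hypothetical sketch of computing all three measure types from a simple intubation log. Every field name and number below is invented for illustration:

```python
# Hypothetical intubation log: each record notes whether push-dose
# pressors were used (process), and whether post-intubation hypotension
# (outcome) or hypoxia (balancing) occurred.
intubations = [
    {"pressors": False, "hypotension": True,  "hypoxia": False},
    {"pressors": False, "hypotension": False, "hypoxia": False},
    {"pressors": True,  "hypotension": False, "hypoxia": False},
    {"pressors": True,  "hypotension": False, "hypoxia": True},
    {"pressors": True,  "hypotension": False, "hypoxia": False},
]

def rate(records, key):
    """Fraction of records where the given event occurred."""
    return sum(r[key] for r in records) / len(records)

outcome   = rate(intubations, "hypotension")  # outcome measure
process   = rate(intubations, "pressors")     # process measure
balancing = rate(intubations, "hypoxia")      # balancing measure
print(f"hypotension {outcome:.0%}, pressor use {process:.0%}, hypoxia {balancing:.0%}")
```

A spreadsheet would serve the same purpose; the point is simply that each measure type is a rate you can track cycle over cycle.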

Question 3: What changes can we make that will result in improvement?

This question is where improvement gets fun. Depending on the complexity of the system you are trying to improve, you may be able to come up with very simple ideas and test them easily. Answering this question also allows you to be quite creative in the solutions you suggest. Have you seen something work in another industry that you think may apply to your day-to-day practice? Try it out!

Once you’ve answered the three questions above, it’s time to test the actual changes. These are done through iterative cycles known as “PDSA or Plan-Do-Study-Act Cycles.”

 

[Figure 2: Use of PDSA cycles]

These are simple experiments that take the ideas that you’ve created above and actually test them.

Plan: What are you planning to do? How will you do it? Is anyone else going to test the change with you? How will you collect the data?

Do: Implement the test and collect the data

Study: What did you learn? Did the data match the predictions?

Act: What do you need to change before the next cycle? Did it work well enough that you can apply it more broadly?

Notice from the figure above that the cycles are designed to be iterative: one cycle flows smoothly into the next as the data guides the improvements. Too often in healthcare, we identify a change and go straight into implementation. This is the dangerous practice that both Scott and Simon cautioned against. The figure below illustrates why it is important to implement changes slowly. Every change comes with a cost, and the higher the cost, the smaller the test should be. If there is any potential for harming a patient, VERY small tests are the first choice. This helps mitigate harm while allowing future cycles to scale up if the change seems feasible. It also allows early adopters to get things right before pushing the change out to the workforce.

[Figure 3]
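The scale-as-you-learn logic of iterative PDSA cycles can be sketched in a few lines of Python. Everything here is hypothetical: the aim, the cycle sizes, and the stubbed “observed” rates standing in for the Do and Study steps:

```python
# Sketch of iterative PDSA cycles: start with a very small test and
# only scale up when the measured outcome beats the aim.
AIM = 0.10  # hypothetical target: post-intubation hypotension below 10%

def run_cycle(n_patients):
    """Stand-in for the Do/Study steps: returns the hypotension rate
    observed over this cycle (stubbed values for illustration)."""
    observed = {3: 0.0, 6: 0.08, 12: 0.09}
    return observed[n_patients]

scale = 3  # Plan: begin with a tiny test of change
history = []
while scale <= 12:
    result = run_cycle(scale)      # Do + Study
    history.append((scale, result))
    if result < AIM:               # Act: the change worked, scale up
        scale *= 2
    else:                          # Act: rethink before testing further
        break

print(history)
```

The small first cycle is what keeps the cost of a failed change low, exactly the point the figure makes.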

While this model may seem complex, with a little trial and error it is quite simple to apply, and it is universally scalable. Want to lose weight? Apply the model. What am I trying to accomplish? Lose weight. How will I know that a change is an improvement? My weight will drop, my clothing will fit better, I’ll feel better. What changes can I make? Eat less, exercise more. These three critical questions, once answered, can then be tested, measured, and modified. Whether trying out simple new ideas or attempting to modify complex systems, this model allows a practical approach to making changes that drive improvement. It also allows “less expert” early adopters to safely dip their feet into the world of FOAM.
