Injury Prevention – Dr Emily Sullivan

“Find some reason that makes you want to advocate for injury prevention and start advocating!” – Dr. Emily Sullivan

A while back one of my senior residents gave us an engaging talk on injury prevention. I was so inspired that I asked her to guest blog on ERmentor – enjoy!

Introduction

Patients suffering from injuries are commonly seen in every emergency department. Thankfully, injuries can be studied and understood just like any other disease. By incorporating the prevention strategies discussed below you’ll be able to reduce both morbidity and mortality in your injured patients, your community, and your country!

The Burden of Injury in Saskatchewan

[Chart: annual injury rates among Saskatchewan residents, by age group and sex]

As you can see from the above chart, a large percentage of SK residents suffer injuries annually, with increasing incidence in younger age groups. Additionally, males are 5% more likely to suffer injuries than females.

[Chart: distribution of injuries in Saskatchewan by type and severity]

Thankfully, many of the injuries we see are minor – strains and sprains, for example.

[Chart: leading causes of injury-related death in Saskatchewan, by age group]

However, devastating morbidity and mortality do occur. In children, in young adults, and overall, motor vehicle collisions are the most common cause of fatal injury. Older adults often succumb to self-inflicted injuries, while in the elderly falls are the leading cause.

[Chart: seasonal distribution of injuries in Saskatchewan, by age group]

Interestingly, while the summer months are the most common time for younger age groups to suffer from an injury, in patients 65+, late fall and early winter (corresponding with the first snowfall in SK) is the highest risk time.

The Burden of Injury in Canada

  • Leading cause of death for Canadians 1-34yo
  • 6th leading cause of death for all ages
  • 2nd leading cause of potential years of life lost before 70yo
  • 2003 Canadian Data
    • 13,906 died as a result of injuries
    • 226,436 admitted to hospital because of injuries
  • 2009-10 Canadian Data
    • 4.27 million Canadians >11yo suffered an injury severe enough to limit usual activity (15% of the population aged 12 and over)
      • 27% of 12-19yo (2/3 of these are due to sports)
      • 14% of adults (1/2 are due to work or sports)
      • 9% of seniors (1/2 are due to walking or daily household chores)


Annual Canadian Economic Burden:

[Charts: annual economic burden of injury in Canada]

Risk Factors for Injury

Behavioral/social risk factors:

(Use these to guide your counselling and referral)

  1. Alcohol and drug use/abuse*
    • Up to 20% of patients seen in the ED after a motor vehicle collision (MVC) meet criteria for an alcohol use disorder (AUD)
    • Patients with an AUD have higher rates of illness and MVC injuries compared to the rest of the population
    • Patients with an AUD are more likely to drive while intoxicated
    • Interestingly, inconsistent helmet use in children is associated with one or more parents having risky drinking habits (pubmed link)
  2. Prior injury* (watch for an upcoming blog with more info on recidivism!)
    • Patients are 10 times more likely to return to the ED if they have been seen once previously for violent injury
    • This number is even higher for domestic violence and children and teenagers with intentional injury
  3. Low income
  4. Male
  5. Age
    • Children and teens are at the highest risk

*Data from Rosen’s

Biomechanical risk factors:

(Use these to guide your assessment and physical exam)

  1. Airbag use
  2. Seatbelt use
  3. Driving speed
  4. Helmet use

Injury Pyramid

For every death reported in the news and every injured patient we see in the ED, there are many more who sustain less severe injuries and countless near misses and risky behaviour.

[Figure: the injury pyramid]

Injury Triangle

Just as the classic epidemiological triad represents the relationship between a host, agent, and environment, this triangle can also serve as a framework for injury occurrence and prevention. We can prevent injuries by stopping or altering the interaction of the host with the agent and its vector, and by making the environment safer.

[Figure: the injury triangle – host, agent/vector, environment]

Haddon’s Injury Prevention Strategies (from Rosen’s)

  1. Prevent the initial marshaling of energy
  2. Reduce the amount of energy marshalled
  3. Prevent the release of energy
  4. Modify the rate or spatial distribution of the release of energy from its source
  5. Separate the energy from the host in space or time
  6. Separate the energy from the host by barrier
  7. Modify the surface or structure of impact
  8. Strengthen the host receiving the energy
  9. Rapidly detect and evaluate damage and counter its continuation and extension
  10. Reparative and rehabilitative measures

Haddon’s Matrix

An injury event can be divided into 3 modifiable phases:

  • Pre-event – production or release of energy has not yet occurred
  • Event – release of energy has occurred, but has not yet transferred to the host
  • Post-event – energy has been transferred, but damage has not yet reached its full extent

Epidemiologic factors in injury prevention include:

  • Host – the human that is affected by the energy release and transfer
  • Vehicle or agent – the vector that transfers energy
  • Environment – changes to the environment that affect the impact of one or more of the injury phases (this can include the physical, sociocultural, and political influences)

Haddon matrix example – motor vehicle collision:

  Phase      | Host (human)   | Vehicle/Agent   | Environment
  Pre-event  | EtOH level     | Old tires       | Winding road
  Event      | Seatbelt use   | Speed of travel | Busy intersection
  Post-event | Age (recovery) | Full gas tank   | EMS response time

The above Haddon matrix example shows nine possible areas for intervention to prevent or mitigate a motor vehicle collision. Prior to the collision, the host’s EtOH level, the car’s old tires, and the environment’s winding road may all influence whether a collision occurs. During the collision, the host’s use of a seatbelt, the speed of the car, and the busy environment of an intersection affect the outcome. After the collision, the host’s age will affect their ability to recover, a car with a full gas tank may do further damage if it ignites, and the amount of time it takes EMS to arrive on scene will determine how much additional damage occurs.

How can you incorporate injury prevention into your practice?

Clinically:

  • Assess biomechanical risk factors and direct your evaluation
  • Document, document, document
  • Assess behavioral risk factors and predict future injury
  • Provide risk screening, counseling, and referral to prevent recidivism (more on this later!)
  • Provide systematized trauma care in the ED

“One or two sentences may be all it takes to prevent future injuries. You CAN make a difference. By counselling a patient on the dangers of driving while intoxicated you may actually be saving your own life or the life of a loved one.”

Through Population Health, Research, and Policy:

  • Advocate for multidisciplinary trauma systems
  • Advocate for rapid, competent EMS services
  • Lead and support policies and environmental changes that can reduce injury
  • Educate the public, especially high-risk groups
  • Lead or participate in research to reduce injuries

“While we all enjoy clinical practice and feel fulfilled and accomplished upon helping a patient in the ED, you may be making the biggest difference of your career by taking time to promote injury prevention to a group of high school children.”

Injury Prevention for the ER Physician

All of the above can be summed up in the three easy-to-remember E’s of Injury Prevention:

  • Education
  • Enforcement/Legislation
  • Engineering

Last Word on the author:

Dr Sullivan was born and raised in Saskatchewan. She’s currently in her PGY3 year and is also pursuing a Masters in Public Health through the U of S. She was recently awarded a $19,000 grant from SGI [on twitter @SGItweets ] to study helmet use in Saskatoon!

Resources

Use these yourselves, use these with your patients, give these to your patients!

  • Parachute Canada: http://www.parachutecanada.org/
  • Saskatchewan Prevention Institute: http://www.skprevention.ca/  [twitter: @SKPrevention1]
  • CAEP Position Statements and Guidelines: http://caep.ca

Injury surveillance in Canada is not done thoroughly and varies from province to province, but limited information can be accessed on the following websites:


Dr Eddy Lang: Making the Most of Chart Reviews


A few months back Dr Eddy Lang [Co-editor of the Royal College Research Guide [link]] graced us with his kind and friendly personality and dropped some pearls on retrospective chart reviews.

Medical Record Review [MRR] Research in General

“Chart reviews don’t get the respect they may deserve” – Dr Lang

Dr Lang lamented the fact that MRR doesn’t get the street cred it deserves. This is in large part because of a historical pattern of:

  • Wrong questions
  • Poor methods
  • Action/Documentation Divide = what happened vs what was documented
  • Missing Data
  • Case identification

Gilbert et al. (1996), Chart Reviews in Emergency Medicine [Pubmed link], showed that 25% of EM publications between 1988-1995 relied on chart reviews. However, although inclusion criteria were present 98% of the time, important data regarding methodology were generally absent (a worked check of one of these intervals follows the list):

  • abstractor training, 18% (95% CI, 13% to 23%)
  • standardized abstraction forms, 11% (95% CI, 7% to 15%)
  • periodic abstractor monitoring, 4% (95% CI, 2% to 7%)
  • abstractor blinding to study hypotheses, 3% (95% CI, 1% to 6%)
  • Interrater reliability was mentioned in 5% (95% CI, 3% to 9%) and tested statistically in 0.4%
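As a quick sanity check, these intervals follow from the standard Wald formula for a 95% confidence interval on a proportion. Assuming the review covered roughly n = 244 chart-review studies (that denominator is my assumption, not a figure reported above), the abstractor-training estimate works out to:

$$ \hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} = 0.18 \pm 1.96\sqrt{\frac{0.18 \times 0.82}{244}} \approx 0.18 \pm 0.05 $$

i.e. roughly 13% to 23%, matching the reported interval.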

In their article, Gilbert et al. lay out their solutions.

The 7 Key Ingredients of good MRR:

1. Abstractor Training: Need to convince the reader that the people pulling the charts were qualified and properly trained

  • Describe the Qualifications and Training procedure for the data Abstractors
  • Before the study begins, pull some trial charts to test the data abstraction process

2. Case Selection: Needs to be explicit and well described (a minimal screening sketch follows this list)

  • Administrative codes are a start but have flaws
    • Often this can lead to a substudy [i.e. do the assigned codes reflect the diagnosis?]
  • Clear inclusion/exclusion criteria
  • Screening procedures must be solid
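A minimal sketch of what explicit, auditable case selection can look like in code – the file name, column names, and ICD-10 codes below are illustrative assumptions, not anything from Dr Lang’s talk:

```python
# Hypothetical, reproducible case-selection screen for a chart review.
import pandas as pd

charts = pd.read_csv("ed_visits.csv")  # one row per ED visit (assumed layout)

# Inclusion: administrative codes for the condition of interest
INCLUDE_CODES = {"N20.0", "N20.1", "N20.2"}  # e.g. renal/ureteric calculus
cases = charts[charts["icd10"].isin(INCLUDE_CODES)]

# Exclusion criteria, applied in a fixed, documented order
cases = cases[cases["age"] >= 18]
cases = cases[cases["disposition"] != "left without being seen"]

# Report counts at each step so the screening procedure can be audited
print(f"{len(charts)} charts screened, {len(cases)} met criteria")
```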

3. Definition of the variables: Needs to be done well

  • Dictionary – define things, e.g. vital signs … at triage? by the EP? on reassessment?
  • Timing and Source of the info needs to be described
  • Adjudication – how are you going to categorise contradictions and inconsistencies?

4. Data Abstraction Tool: Make it good

  • Need to have a standardised data abstraction tool – use your research staff here
  • Need to have a uniform process for handling missing data – think about what to do with missing or unclear data
  • Consider using software to manage data [e.g. REDCap software [link]] – see the abstraction sketch after this list
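To make points 3 and 4 concrete, here is a hedged sketch of a standardized abstraction form in which every variable is predefined and missing data receive one uniform code instead of being left blank. All variable names are invented for illustration:

```python
# Standardized data abstraction with explicit missing-data handling.
MISSING = "not documented"  # one uniform code; fields are never left blank

FIELDS = ["triage_hr", "triage_sbp", "time_to_analgesia_min", "disposition"]

def abstract_chart(chart: dict) -> dict:
    """Pull only the predefined variables; absent ones get the MISSING code."""
    return {field: chart.get(field, MISSING) for field in FIELDS}

record = abstract_chart({"triage_hr": 104, "disposition": "discharged"})
print(record)
# {'triage_hr': 104, 'triage_sbp': 'not documented',
#  'time_to_analgesia_min': 'not documented', 'disposition': 'discharged'}
```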

5. Blinding:

  • Are the abstractors unaware of the study hypothesis? – consider quizzing them afterwards to see!

6. Quality Control

  • regular meetings to ensure standard process
  • Need to monitor the abstractors’ work – consider audits
  • resolution of conflicting assessments

7. Inter-rater reliability: Report inter-rater reliability – it’s eKspected … get it? (A worked kappa example follows below.)

  • reported on a sample of charts reviewed by another [blinded] reviewer
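To make the kappa pun concrete, here is a minimal sketch of how inter-rater reliability is computed on that blinded sample, using Cohen’s kappa, κ = (p₀ − pₑ)/(1 − pₑ). The two abstractors’ codings are made up, and I’m assuming scikit-learn is available:

```python
# Cohen's kappa for two blinded abstractors coding the same 10 charts.
from sklearn.metrics import cohen_kappa_score

abstractor_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
abstractor_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

# kappa corrects raw agreement (here 8/10) for agreement expected by chance
kappa = cohen_kappa_score(abstractor_a, abstractor_b)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level
```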

Eddy then introduced another landmark article, by Jansen et al., who created a guideline on how to conduct MRR – [Pubmed Link]

Criteria to Follow

[Table: Jansen et al. criteria for conducting medical record review]

Dr Lang finished by giving us some examples of good MRR

Instructive Examples – MRR CAN change practice!

Answer questions that change local practice … e.g. Eddy’s 1995 Publication on the prognostic value of amylase in the evaluation of the abdominal pain patient. [Pubmed Link]

  • Pulled lab results
  • Showed that there was no difference between patients with intermediate levels of amylase and normal patients

Answer questions that change global practice e.g. Ross Baker et al Canadian Adverse Events Study [Pubmed Link]

  • Retrospective review of 3500 charts from 5 provinces in Canada
  • Showed an AE rate of 7.5%, which translates into roughly 70,000 potentially preventable AEs each year in Canadian hospitals (see the rough arithmetic below)
  • placed the spotlight on patient safety
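To see where a number like 70,000 can come from, a back-of-envelope check – assuming roughly 2.5 million annual hospital admissions in Canada and a preventability fraction of about 37%, both of which are my assumptions rather than figures from the talk:

$$ 2.5 \times 10^{6} \times 0.075 \approx 185{,}000 \text{ AEs per year}, \qquad 185{,}000 \times 0.37 \approx 70{,}000 \text{ potentially preventable} $$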

Good MRR Questions

How are we doing? [care practices, quality of care – e.g. look at time to analgesia after the intro of a new acute pain protocol for, say... renal colic]

What does this condition look like? [e.g. use a key word search for "Rugby" ... pull charts associated with rugby injuries]

Derivation Models [e.g. Risk Factors for Hospitalization after Dog Bite Injury [Pubmed Link]]

Ethics of MRR?

  • Are there ways to bypass full review? Yes! If a project is labeled as QI [rather than research] it may not require a full ethics review and may receive “expedited review”
  •  Consider using the ARECCI Ethics Screening Tool [Link]

Last word on our guest speaker:

I have known Eddy for a few years, having collaborated on a couple of occasions putting on workshops at SAEM and CAEP. He is one of Canada’s best researchers, a solid ER doc, a great dad and family man and a true ambassador for Emergency Medicine. Thanks Eddy for letting me replay your words of wisdom.

My Ideas/Homework:

  1. Buy that Royal College Guide
  2. Start in on a Project:
    1. Renal Colic after new pain protocol – time to analgesia
    2. Time to EKG after New Protocol
    3. Reduction in Flex/Ex Ordering after journal club on C spine
    4. Reduction in Time in ER after New HS Trop
    5. Change in medication use in migraine after Journal Club
    6. Management of abscesses [packing vs loops] after our Journal Club

Teaching EM Trainees of Different Levels in the ER

[Image credit: http://professionallearningmatters.org/]

Like many institutions, we have a mix of EM resident learners rotating through our departments. Expectations [and competencies] of a junior learner differ greatly from those of a senior learner. For example:

  • PGY 1 – Focus on clinical skills e.g. Xray reading and procedures
  • PGY2-4 – Focus on more challenging patient encounters e.g. medical and procedural management of the septic patient
  • PGY 5 –  Focus on managerial roles e.g. taking referrals from family doctors

At our recent Faculty Development Workshop my brilliant colleague, Dr Rob Woods, gave an engaging presentation on teaching senior learners in the ED. He subsequently facilitated an impromptu crowd-sourcing of the participants. The result was the derivation of an easy-to-apply rubric of expectations for learners at various seniority levels in the ER. We hope you find it useful:

[Table: Teaching EM Trainees at Different Levels of Training]


Failure to Fail Part 3 – A Prescription for Better Evaluations in the ER

Introduction:

In the last two posts I introduced the fact that there exists a culture of ‘failure to fail’ in medicine. Faculty themselves have a large part to play and, apart from being too lenient, also unwittingly introduce bias during learner evaluations. In this final post I want to share my approach to trainee evaluation.

I believe that most clinicians already know how to make judgements about learners – they just don’t know that they know! Furthermore, if you follow the six steps below I think you’ll start to make more effective judgements and provide more meaningful feedback to learners. Just maybe, you might even identify [and start the process of helping] a failing learner. Comments welcome.

FACT: Most clinicians ALREADY possess skills transferable to trainee evaluation


  • ER docs routinely diagnose and treat conditions in the ER with little information
    • We have the ability to quickly discern sick from not sick
    • What’s more – (if you think about it) you also possess the right words to describe why … pale, cyanotic, listless versus pink, alert, smiling
    • The same skills can be used to identify competence [introduced self, gave clear question to consultant, provided concise history] and incompetence [pertinent negatives missing from history, inability to generate basic ddx, brusque interpersonal interactions]


  • Many clinicians also possess transferable “Soft Skills”. We use these daily in difficult patient interactions.
  • Guess what? The exact same skills can be brought to bear in difficult trainee evaluations!
    • listening empathetically
    • breaking bad news
    • dealing with emotions
    • dealing with hostility
[Image credit: wicked writing skills]

My 6-step Prescription for Evaluation Success:

I have tried to illustrate above that the same skills that make you an astute clinician will also allow you to discern whether a learner may be under-performing and then let them know. Why not stick to what works? Below is my six-step prescription for evaluation success. I am hoping that it is pleasantly familiar to you.

STEP 1: Take a history from the learner at the beginning of the shift.

The ED STAT course run by CAEP [link] teaches Faculty to understand where the learner is coming from at the start of the shift.

  • Ask the learner about:
    • Level of training
    • EM experience
    • Home program
  • Provide an orientation to the ER
  • Ascertain their learning goals
  • Set expectations
    • based on level of training
    • that they will be receiving feedback about their performance
    • [in fact - tell them about these six steps]

STEP 2: Examine the learner’s skills

Use the IPPA approach!

  • Inspection – watch the learner
    • taking a history
    • performing physical exam manoeuvres
    • giving the diagnosis/ddx
    • giving discharge instructions
  • Palpation – perform physical exams together
    • to confirm or refute findings
    • to correct errors
    • to role model
    • to learn from them
      • learners often remember all the special manoeuvres better
      • they ask questions that challenge you to read up – like “which cranial nerve is glossopharyngeal?”
  • Percussion – use your finely tuned senses to ‘tap out’ gaps in:
    • knowledge
    • clinical skills
    • attitude
    • [Hint: the mnemonic KSALTs can help identify where the problem lies with the learner. Read this [link] to the ALIEM MEdIC case].
  • Auscultation – listen to the learner
    • during consultations
    • during interactions with other providers
    • [HINT: eavesdrop on their histories from behind the curtain]

STEP 3: Diagnose the Learner:

Screen Shot 2014-02-25 at 4.52.23 PM

  • Is the learner competent or not?
  • Think about biases you may be introducing and try to avoid them – especially central tendency bias [link].
  • Use my ABC RIMES approach:
    • Attitude - make comments like “self-directed” vs “unmotivated”
    • Behaviours – “procedural skills above peers” vs “poor attention to workplace safety”
    • Competencies: [I use a modification of the RIME approach [link] with the addition of an ‘S’]
        • Reporting – give feedback on history-taking, written communication and presentations
        • Interpretation – give feedback on interpretation of Xrays, EKG’s and ‘putting information together’
        • Managerial skills – give feedback on efficiency, multi-tasking, stewardship
        • Educator/Expertise – give feedback on knowledge and teaching skills
        • Soft skills – give feedback on communication, team-skills, honesty, reliability, insight, receptivity to feedback
  • For those that are slavish to the CanMEDS brand, try the MS CAMP mnemonic:
    • Medical expert
    • Scholar
    • Communicator/ Collaborator
    • Advocate
    • Manager
    • Professional

STEP 4: Provide your diagnosis

  • Think of your favourite uncle or granddad – would you be happy with the care that they received? If so – great!
  • Provide feedback using the ABC RIMES framework [here's an earlier post on providing feedback].
  • If the feedback is going to be negative – you need to prepare. Here’s more information on getting ready for a difficult conversation [link].
    • If you’ve primed the learner at the beginning of the shift – they shouldn’t be surprised.
    • Have the courage to practise tough love – trainees are adults, they should be expected to receive feedback like adults.

STEP 5: Document! Document! Document!

Screen Shot 2014-02-25 at 4.57.57 PM

  • ALL verbal feedback needs to be written down.
  • Use strong descriptive terms rather than weak ones. Saying ‘good job’ is good for self-confidence, but learners need specifics.
  • Use the ABC RIMES or MS CAMP framework – effective evaluation forms make this task easier.
  • RANK THE LEARNER. They cannot all be ‘above average’ – some trainees are below average.
  • Even though a learner may be having a bad day this needs to be documented so that patterns can emerge.

STEP 6: Consult them out!

Most of us feel that problem learners are “someone else’s problem” – you’re right! You need to consult them out. But like all consults – you need to give the consultant the right information [see above].

  • Remediation is NOT your responsibility
  • Issues need to be referred to:
    • Rotation coordinator
    • Program director

Summary:

Most clinicians already possess the skills needed to make judgements about failing learners. Furthermore, they also possess the soft skills to give negative face-to-face evaluations. If all clinicians used these skills [together with direct observation and expectation-setting] we may identify more learners who need help. Ultimately we will achieve more success at graduating cadres of physicians, 100% of whom are competent and caring.

Future Directions:

There needs to be huge emphasis on culture change. This is going to require:

  1. More faculty development and engagement. BOTH faculty and programs need to come together to do this. [I'd appreciate comments on how to bring faculty to the table because at my institution this is problematic]
  2. Learners themselves need to understand how to receive and act on feedback.

References:

ED STAT! Course Material

Hauer et al 2010. Twelve tips for implementing tools for direct observation of medical trainees’ clinical skills during patient encounters.

Kogan et al 2010. What drives faculty ratings of residents’ clinical skills? The impact of faculty’s own clinical skills.

Kogan et al 2012. Faculty staff perceptions of feedback to residents after direct observation of clinical skills.

Failure to Fail Part 2: Types of Evaluation Bias

Introduction:

We know that the best way to evaluate learners is by directly observing what they do in the workplace. Unfortunately [for a variety of reasons] we do not do enough of this. In my last post I described some reasons why we sometimes fail at making appropriate judgements about failing learners.

When it comes to providing feedback, there is much room for improvement. We know that feedback can be influenced by the source, the recipient and the message. What most people don’t know is that, when you’re evaluating a learner, you yourself could be unwittingly introducing bias – just like when we make diagnoses.

Types of Evaluation Bias:

Halo Effect

  • If a learner really excels in one area,  this may positively influence their evaluation in other areas.
  • For example, a resident successfully/quickly intubates a patient with respiratory arrest. The evaluator is so impressed she minimizes deficiencies in knowledge, punctuality, ED efficiency.

Central Tendency Bias

  • It is a human tendency to NOT give an extreme answer.
  • Recall from part 1 [link] that faculty tend to overestimate learners’ skills and furthermore tend to pass them on rather than fail them.
  • This may explain why learners are all ranked “above average”.

Severity/Leniency Bias

  • Some faculty are inherently more strict [aka “Hawks”] while others are more lenient [“Doves”]
  • Research is inconclusive on the demographics that predispose evaluators to being either of these
    • One study suggested ethnicity and experience may be correlated with hawkishness

Recency Bias

  • This is more relevant to end-of-year evaluations.
  • Despite the fact that we all have highs and lows, recent performance tends to overshadow remote performance

Contrast Effect

  • After just working a shift with an exemplary learner, the evaluator may be unfairly harsh when faced with a less capable learner.

Personal Bias

  • Evaluators can be biased by their own mental ‘filters’. These develop from their experiences, background, etc. They will therefore form subjective opinions based on:
    • First Impressions [Primacy Effect]
    • Bad Impressions [Horn Effect] – the opposite of the halo effect [e.g. if you're shabbily dressed you may be brilliant, but might be underappreciated]
    • The “similar-to-me” effect – faculty will be more favourable to a trainee who is perceived to be similar to themselves.

Opportunity for EM Faculty

Okay. In part 1 you saw how we as faculty are partly to blame for evaluation failure. Above, I have illustrated how we also introduce bias when we try to evaluate learners. Now for the good news …

  • What faculty need to ascertain is:
    • how well the trainee performs HIGH QUALITY PATIENT CARE
      • what are the features of this?
      • how do I put this into words?
  • This can easily be done by:
    1. making observations – thankfully, in the ER we have all the tools we need [read below]
    2. synthesizing observations into an evaluation - in Part 3 of this blog I’ll show you how I go about putting my observations into words.

Each ER shift is a perfect laboratory for direct observation


  • During a shift in the ED, multiple domains of competency can be assessed – such as physical exam, procedural skill and written communication skills. In fact ALL of the CanMEDS competencies can be assessed in real time.
  • Additionally we [at U of S] already do things that are encouraged:
    • We provide daily feedback using:
      • An effective [validated] assessment tool [those daily encounter cards were designed using current evidence on evaluation]
      • Multiple assessments by several faculty members throughout the rotation
    • The ER also provides an avenue for 360° evaluation [by ancillary staff, co-learners and patients]

Summary:

Hopefully you now know a bit more about why we as medical faculty fail at failing learners and how evaluation bias plays into this. I have tried to show that the ER provides a perfect learning lab for direct observation and feedback. In the next post I will prescribe my personal formula for successful trainee assessment. Comments please! [There's a link at the top under the title of the post]

References:

Management Study Guide Website

Dartmouth Website


Failure to Fail Part 1 – Why faculty evaluation may not identify a failing learner

[Image: Lake Wobegon]

I recently gave a talk to fellow faculty on the phenomenon of “failure to fail” in emergency medicine. I am no expert, but I have tried to synthesize the details in a useful way. I have broken it down into three parts. Part 1 deals with the phenomenon of failure to fail. In separate posts I will introduce some forms of evaluator bias and then provide a prescription for more effective learner assessment in the ER. As always, comments are welcome!

Introduction:

  1. We expect our medical trainees to acquire the fundamental clinical skills
  2. We expect them to evolve from novice to expert.
  3. Our goal is to graduate cadres of competent physicians who will serve their communities safely, effectively and conscientiously.

The Importance of Direct observation and Work-Based Assessment:

The current model of medical training is a blend of didactic teaching, clinical learning, simulation and self-directed endeavours. We then try to evaluate the learners formatively and summatively through written exams and standardised clinical scenarios.

We are learning that the best tool to evaluate learners is direct observation in the work context. This requires four things:

  1. Deliberate practice [on the part of the trainee]
  2. Intentional observation [on the part of the faculty]
  3. Feedback
  4. Action planning

This will place even more emphasis on direct faculty oversight. We will therefore need to develop faculty members’ skills and coach them how to:

  • Perform direct observation
  • Perform a valid [i.e. repeatable] evaluation of skills
  • Provide effective feedback


Current State of Trainee Evaluation

FACT: the current model is sometimes failing to discriminate and fail learners

      • This occurs in spite of observed unsatisfactory performance.
      • This occurs despite faculty confidence with their ability to discriminate.
      • Most faculty agree that this is the single most important problem with trainee evaluation.

We’re trying to understand why… This is what I found.

  •  The Wobegon Effect – [wikilink]
    • From the fictional town featured on a radio show – a town where

 “all the women are strong, all the men are good looking, and all the children are above average”

    • Describes the human tendency to overestimate one’s achievements and capabilities in relation to others. [Also known as illusory superiority - link] 
  • Grossly inflated performance ratings have been found practically everywhere in North America:
    • Business – both employees and managers
    • University professors – overestimate their excellence [gulp]
    • Studies on ‘bad drivers’ – everyone has one of these in their family! 
  • Not surprisingly, this phenomenon is equally pervasive in medicine
    • Faculty struggle to provide honest feedback and consistent [valid] evaluations. [In one study, raters rated 66% of trainees "above average" – simply not possible! Pubmed]
    • Fazio et al [see references] demonstrated that 40% of IM clerks should have failed, yet were passed on … FORTY PERCENT!!!
    • A study by Harasym et al [Link] showed that OSCE examiners are more likely to err on the side of passing than failing students


    • Residents [in particular lowest performing ones] overestimate their competency compared to their ratings by faculty peers and nurses [Pubmed Link]
    • Moreover, the biggest overestimations lay in the so-called “soft skills” – communication, teamwork and professionalism. These are often the problems that give faculty and colleagues headaches with a particular learner.
    • One reason might be that soft skills are hard to quantify – unlike suturing skills, where incompetence is quickly identified
  • The end result is a culture of “failure to fail” where …
    • Many graduates are not acquiring the required skill set
    • We are failing to serve patient needs
      • Reduced safety, increased diagnostic error and reduced patient satisfaction
    • There is increasing negative fallout for the ENTIRE profession – our reputations are being besmirched in the digital era.
    • Ultimately, public trust is being eroded.
    • We cannot succeed at our job without public confidence in what we do.

Why we fail to fail learners:

Barriers to adequate Evaluation:

Learner factors

  • Learners are all different. Moreover, the same learner will vary in skills through time as they grow and develop.
  • We all have good and bad days.
  • There exists a phenomenon called “Group-work effect” where medical teams can mask deficiencies of individual learners.

Institution factors

  • Tools of evaluation are flawed – some eval forms are poorly designed to discriminate between learners
  • We all work in the current culture of “too busy to teach”.
  • There is an incredible amount of work needed to change this culture

Faculty Factors

  • Faculty feel confident in their ability when polled.
  • Faculty feel a sense of responsibility to patient, profession, and learner BUT …
  • Raters themselves are the largest source of variance in ratings of learners (see the variance decomposition sketched after this list):
    • Examiners account for 40% of the variance in scores on OSCEs
    • Examiners’ ratings are FOUR times more varied than the “true” scores of students
    • Some tend to be more strict – “Hawks” … some are more lenient – “Doves”
    • Negligible effect of gender, ethnicity and age/experience [though one UK study found that "hawks" are more likely to be from an ethnic minority and older – link]
  • Clinical competence of faculty members is also correlated with better evaluations [link]
    • One interesting study had faculty take an OSCE themselves and then rate students … the results showed that:
      • Faculty use their own practice style as a frame of reference to rate learners
      • Better performers on the OSCE were more insightful and attentive evaluators
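One way to read those variance claims: in a generalizability-theory framing, an observed rating decomposes into independent variance components, and the 40% figure says the rater terms dominate. This is the standard textbook decomposition, sketched here for context rather than taken from the studies above:

$$ \sigma^2_{\text{rating}} = \sigma^2_{\text{trainee}} + \sigma^2_{\text{rater}} + \sigma^2_{\text{trainee} \times \text{rater}} + \sigma^2_{\text{error}} $$

A defensible evaluation process tries to shrink the rater terms (through rater training and multiple independent raters) so that the trainee term carries the signal.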

A 2013 convenience sample of U of S EM faculty gave these top three reasons:

1. Fear of being perceived as unfair

2. Lack of confidence in the supporting evidence when considering to “fail”

3. Uncertainty about how to identify specific behaviours

What I discovered in the literature:

  1. Competing demands [clinical vs educational] mean that education suffers.
  2. Lack of documentation – preceptors fail to record day-to-day performance, so when it comes to the end-of-rotation eval there is not enough evidence.
  3. The interpersonal conflict model describes the following phenomenon:
  • Faculty members’ goal is to improve trainees’ skills – preceptors do care a lot!
    • They perceive the need to emphasize the positives and be gentle [to protect learner self-esteem and maintain an alliance/engage the trainee]
    • Faculty try to make feedback constructive without the learner feeling like it's a personal attack
    • This creates tension when one is forced to give negative and critical feedback
    • The emotional component of giving negative feedback makes it even more difficult
    • Consequently this tension forces us to overemphasize the positives
    • This creates mixed messages regarding feedback, and learners walk away with the wrong message.

4. Lack of self-efficacy:

  • There's a lack of knowledge of what specifically to document: a) faculty don't know what type of information to jot down; b) faculty struggle to identify specific behaviours associated with failure.

[The reported low self-confidence during evaluations is actually a product of our training, or rather the lack thereof. No-one teaches you how to navigate the minefields of evaluation. This is particularly evident for soft skills. Staff often think that their judgements are merely subjective interpretations.]

5. Anticipating an [arduous] appeal process – having an extra commitment, having to defend one's actions/comments, fear of escalation [e.g. legal action].
6. Lack of remediation options – there exists a lack of faculty support, which makes faculty unsure about what to do/advise after diagnosing a problem.

Summary:

We have seen that the current model of medical training is failing to identify and fail underperforming learners. There are several reasons why, but faculty themselves play a large role in this culture of “Failure to Fail”. In my next post I will highlight some biases that we encounter when judging learners and provide a prescription for more effective learner evaluation.

Acknowledgement – Dr Jason Frank @drjfrank for pointing me in the right direction [authors Kogan and Holmboe are the ones you should search out in particular]

References:

Dudek et al 2005. Failure to Fail: The Perspectives of Clinical Supervisors.

Fazio et al 2013. Grade inflation in the internal medicine clerkship: a national survey.

Harasym et al 2008. Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs.

Kogan and Holmboe 2013. Realizing the promise and importance of performance-based assessment.