Dr Eddy Lang: Making the Most of Chart Reviews




A few months back Dr Eddy Lang [Co-editor of the Royal College Research Guide [link]] graced us with his kind and friendly personality and dropped some pearls on retrospective chart reviews.

Medical Record Review [MRR] Research in General

“Chart reviews don’t get the respect they may deserve” – Dr Lang

Dr Lang lamented the fact that MRR doesn’t get the street cred it deserves. This is in large part because of a historical pattern of:

  • Wrong questions
  • Poor methods
  • Action/Documentation Divide = what happened vs what was documented
  • Missing Data
  • Case identification

Gilbert et al. 1996. Chart Reviews in Emergency Medicine [Pubmed link] showed that 25% of EM publications between 1988 and 1995 relied on chart reviews. However, although inclusion criteria were present 98% of the time, important data regarding methodology were generally absent:

  • abstractor training, 18% (95% CI, 13% to 23%)
  • standardized abstraction forms, 11% (95% CI, 7% to 15%)
  • periodic abstractor monitoring, 4% (95% CI, 2% to 7%)
  • abstractor blinding to study hypotheses, 3% (95% CI, 1% to 6%)
  • Interrater reliability was mentioned in 5% (95% CI, 3% to 9%) and tested statistically in 0.4%

In their article, Gilbert et al. lay out their solutions.

The 7 Key Ingredients of good MRR:

 1. Abstractor Training: Need to convince the reader that the people pulling the charts were qualified and trained

  • Describe the qualifications and training procedure for the data abstractors
  • Before the study begins, pull some trial charts to test the data abstraction process

2. Case Selection: Needs to be explicit and well described

  • Administrative codes are a start but have flaws
    • Often this can lead to a substudy [i.e. do the ultimate codes reflect the Dx?]
  • Clear inclusion/exclusion criteria
  • Screening procedures must be solid

3. Definition of the variables: Need to be done well

  • Dictionary – define things e.g. vital signs … at triage? by the EP? on reassessment?
  • Timing and Source of the info needs to be described
  • Adjudication – how are you going to categorise contradictions and inconsistencies?

4. Data Abstraction Tool: Make it good

  • Need to have a standardised data abstraction tool – use your research staff here
  • Need to have a uniform process for handling missing data – think about what to do with missing or unclear data
  • Consider using software to manage data [e.g. REDCap software [link]]
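One way to make that "uniform process for missing data" concrete is to code it explicitly rather than leaving blanks. Here is a minimal Python sketch [not from the talk; the field names and categories are illustrative assumptions] of a convention that distinguishes "not documented" from "documented but unreadable":

```python
from enum import Enum

class Missing(Enum):
    """Explicit codes so missing data are handled uniformly, not as blanks."""
    NOT_DOCUMENTED = "not documented in chart"
    ILLEGIBLE = "documented but unreadable/ambiguous"

def abstract_triage_hr(chart: dict):
    """Pull the triage heart rate from one agreed-upon source in the chart."""
    value = chart.get("triage_vitals", {}).get("hr")
    if value is None:
        return Missing.NOT_DOCUMENTED          # field simply absent
    try:
        return int(value)                      # usable numeric value
    except (TypeError, ValueError):
        return Missing.ILLEGIBLE               # present but uninterpretable

print(abstract_triage_hr({"triage_vitals": {"hr": "88"}}))  # 88
print(abstract_triage_hr({"triage_vitals": {}}))            # Missing.NOT_DOCUMENTED
```

The point of the convention is that downstream analysis can count and report the two kinds of missingness separately instead of silently dropping charts.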

5. Blinding:

  • Are the abstractors unaware of the study hypothesis? – consider quizzing them afterwards to see!

6. Quality Control

  • regular meetings to ensure standard process
  • need to monitor the abstractors work – consider audits
  • resolution of conflicting assessments

7. Inter-rater reliability: Report inter-rater reliability – it’s eKspected …get it?

  • reported on a sample of charts reviewed by another [blinded] reviewer
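For two abstractors rating the same sample of charts, the usual statistic is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal Python sketch [the chart labels below are made up for illustration; a real study would typically use a stats package]:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items the raters labelled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two abstractors classify the same 10 charts as "case" / "not"
a = ["case", "case", "not", "case", "not", "not", "case", "not", "case", "not"]
b = ["case", "case", "not", "not", "not", "not", "case", "not", "case", "case"]
print(round(cohens_kappa(a, b), 2))  # 0.6
```

Here 80% raw agreement shrinks to kappa = 0.6 once chance agreement is removed, which is why kappa is conventionally reported alongside raw percent agreement.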

Eddy then introduced another landmark article by Jansen et al., who created a guideline on how to conduct MRR – [Pubmed Link]

Criteria to Follow

[Screenshot: criteria table from Jansen et al.]

Dr Lang finished by giving us some examples of good MRR

Instructive Examples – MRR CAN change practice!

Answer questions that change local practice … e.g. Eddy’s 1995 Publication on the prognostic value of amylase in the evaluation of the abdominal pain patient. [Pubmed Link]

  • Pulled lab results
  • Showed no difference in outcomes between patients with intermediately elevated amylase levels and those with normal levels

Answer questions that change global practice e.g. Ross Baker et al Canadian Adverse Events Study [Pubmed Link]

  • Retrospective review of 3500 charts from 5 provinces in Canada
  • Showed an AE rate of 7.5%, which translates into 70,000 annual AEs in Canadian hospitals
  • placed the spotlight on patient safety

Good MRR Questions

How are we doing? [care practices, quality of care e.g. Look at time to analgesia after intro of new acute pain protocol for say... renal colic]

What does this condition look like? [e.g. run a key word search for "rugby" and pull charts associated with rugby injuries]

Derivation Models [e.g. Risk Factors for Hospitalization after Dog Bite Injury [Pubmed Link]]

Ethics of MRR?

  • Are there ways to bypass full ethics review? Yes! If it’s labelled as QI it may not require a full ethics review and may qualify for “expedited review”
  •  Consider using the ARECCI Ethics Screening Tool [Link]

Last word on our guest speaker:

I have known Eddy for a few years, having collaborated on a couple of occasions putting on workshops at SAEM and CAEP. He is one of Canada’s best researchers, a solid ER doc, a great dad and family man and a true ambassador for Emergency Medicine. Thanks Eddy for letting me replay your words of wisdom.

My Ideas/Homework:

  1. Buy that Royal College Guide
  2. Start in on a Project:
    1. Renal Colic after new pain protocol – time to analgesia
    2. Time to EKG after New Protocol
    3. Reduction in Flex/Ex Ordering after journal club on C spine
    4. Reduction in Time in ER after New HS Trop
    5. Change in medication use in migraine after Journal Club
    6. Management of abscesses [packing vs loops] after our Journal Club



The post Dr Eddy Lang: Making the Most of Chart Reviews appeared first on ERMentor.

Teaching EM Trainees of Different Levels in the ER


from http://professionallearningmatters.org/

Like many institutions, we have a mix of EM resident learners rotating through our departments. Expectations [and competencies] of junior learners differ greatly from those of senior learners. For example:

  • PGY 1 – Focus on clinical skills e.g. Xray reading and procedures
  • PGY2-4 – Focus on more challenging patient encounters e.g. medical and procedural management of the septic patient
  • PGY 5 –  Focus on managerial roles e.g. taking referrals from family doctors

At our recent Faculty Development Workshop my brilliant colleague Dr Rob Woods gave an engaging presentation on teaching senior learners in the ED. He subsequently facilitated an impromptu crowd-sourcing of the participants. The result was an easy-to-apply rubric of expectations for learners at various seniority levels in the ER. We hope you find it useful:

[Embedded table: Teaching EM Trainees at Different Levels of Training]

The post Teaching EM Trainees of Different Levels in the ER appeared first on ERMentor.

Failure to Fail Part 3 – A Prescription for Better Evaluations in the ER


In the last two posts I introduced the fact that there exists a culture of ‘failure to fail’ in medicine. Faculty themselves have a large part to play and, apart from being too lenient, also unwittingly introduce bias during learner evaluations. In this final post I want to share my approach to trainee evaluation.

I believe that most clinicians already know how to make judgements about learners – they just don’t know that they know! Furthermore, if you follow the six steps below, I think you’ll start to make more effective judgements and provide more meaningful feedback to learners. Just maybe – you might even identify [and start the process of helping] a failing learner. Comments welcome.

FACT: Most clinicians ALREADY possess skills transferable to trainee evaluation


  • ER docs routinely diagnose and treat conditions in the ER with little information
    • We have the ability to quickly discern sick from not sick
    • What’s more – (if you think about it) you also possess the right words to describe why … pale, cyanotic, listless versus pink, alert, smiling
    • The same skills can be used to identify competence [introduced self, gave clear question to consultant, provided concise history] and incompetence [pertinent negatives missing from history, inability to generate basic ddx, brusque interpersonal interactions]


  • Many clinicians also possess transferable “Soft Skills”. We use these daily in difficult patient interactions.
  • Guess what? The exact same skills can be brought to bear in difficult trainee evaluations!
    • listening empathetically
    • breaking bad news
    • dealing with emotions
    • dealing with hostility
from wicked writing skills

My 6-step Prescription for Evaluation Success:

I have tried to illustrate above that the same skills that make you an astute clinician will also allow you to discern whether a learner may be under-performing and then let them know. Why not stick to what works? Below is my six-step prescription for evaluation success. I am hoping that it is pleasantly familiar to you.

STEP 1: Take a history from the learner at the beginning of the shift.

The ED STAT course run by CAEP [link] teaches Faculty to understand where the learner is coming from at the start of the shift.

  • Ask the learner about:
    • Level of training
    • EM experience
    • Home program
  • Provide an orientation to the ER
  • Ascertain their learning goals
  • Set expectations
    • based on level of training
    • that they will be receiving feedback about their performance
    • [in fact - tell them about these six steps]

STEP 2: Examine the learner’s skills

Use the IPPA approach!

  • Inspection – watch the learner
    • taking a history
    • performing physical exam manoeuvres
    • giving the diagnosis/ddx
    • giving discharge instructions
  • Palpation – perform physical exams together
    • to confirm or refute findings
    • to correct errors
    • to role model
    • to learn from them
      • learners often remember all the special manoeuvres better
      • they ask questions that challenge you to read up – like “which cranial nerve is glossopharyngeal?”
  • Percussion – use your finely tuned senses to ‘tap out’ gaps in:
    • knowledge
    • clinical skills
    • attitude
    • [Hint: the mnemonic KSALTs can help identify where the problem lies with the learner. Read this [link] to the ALIEM MEdIC case].
  • Auscultation – listen to the learner
    • during consultations
    • during interactions with other providers
    • [HINT: eavesdrop on their histories from behind the curtain]

STEP 3: Diagnose the Learner:


  • Is the learner competent or not?
  • Think about biases you may be introducing and try to avoid them – especially central tendency bias [link].
  • Use my ABC RIMES approach:
    • Attitude - make comments like “self-directed” vs “unmotivated”
    • Behaviours – “procedural skills above peers” vs “poor attention to workplace safety”
    • Competencies: [I use a modification of the RIME approach [link] with the addition of an ‘S’]
        • Reporting – give feedback on history-taking, written communication and presentations
        • Interpretation – give feedback on interpretation of Xrays, EKG’s and ‘putting information together’
        • Managerial skills – give feedback on efficiency, multi-tasking, stewardship
        • Educator/Expertise – give feedback on knowledge and teaching skills
        • Soft skills – give feedback on communication, team-skills, honesty, reliability, insight, receptivity to feedback
  • For those that are slavish to the CanMEDs brand try the MS CAMP mnemonic:
    • Medical expert
    • Scholar
    • Communicator/ Collaborator
    • Advocate
    • Manager
    • Professional

STEP 4: Provide your diagnosis

  • Think of your favourite uncle or granddad – would you be happy with the care that they received? If so  - great!
  • Provide feedback using the ABC RIMES framework [here's an earlier post on providing feedback].
  • If the feedback is going to be negative – you need to prepare. Here’s more information on getting ready for a difficult conversation [link].
    • If you’ve primed the learner at the beginning of the shift – they shouldn’t be surprised.
    • Have the courage to practise tough love – trainees are adults, they should be expected to receive feedback like adults.

STEP 5: Document! Document! Document!


  • ALL verbal feedback needs to be written down.
  • Use strong descriptive terms rather than weak ones. Saying ‘good job’ is good for self-confidence, but learners need specifics.
  • Use the ABC RIMES or MS CAMP framework – effective evaluation forms make this task easier.
  • RANK THE LEARNER. They cannot all be ‘above average’ – some trainees are below average.
  • Even though a learner may be having a bad day, this needs to be documented so that patterns can emerge.

STEP 6: Consult them out!

Most of us feel that problem learners are “someone else’s problem” – you’re right! You need to consult them out. But like all consults – you need to give the consultant the right information [see above].

  • Remediation is NOT your responsibility
  • Issues need to be referred to:
    • Rotation coordinator
    • Program director


Most clinicians already possess the skills needed to make judgements about failing learners. Furthermore, they also possess the soft skills to give negative face-to-face evaluations. If all clinicians used these skills [together with direct observation and expectation-setting] we may identify more learners that need help. Ultimately we will achieve more success at graduating cadres 100% of whom are competent, caring physicians.

Future Directions:

There needs to be huge emphasis on culture change. This is going to require:

  1. More faculty development and engagement. BOTH faculty and programs need to come together to do this. [I'd appreciate comments on how to bring faculty to the table because at my institution this is problematic]
  2. Learners themselves need to understand how to receive and act on feedback.


ED STAT! Course Material

Hauer et al 2010. Twelve tips for implementing tools for direct observation of medical trainees’ clinical skills during patient encounters.

Kogan et al 2010. What drives faculty ratings of residents’ clinical skills? The impact of faculty’s own clinical skills.

Kogan et al 2012. Faculty staff perceptions of feedback to residents after direct observation of clinical skills.



The post Failure to Fail Part 3 – A Prescription for Better Evaluations in the ER appeared first on ERMentor.

Failure to Fail Part 2: Types of Evaluation Bias


We know that the best way to evaluate learners is by directly observing what they do in the workplace. Unfortunately [for a variety of reasons] we do not do enough of this. In my last post I described some reasons why we sometimes fail at making appropriate judgements about failing learners.

When it comes to providing feedback, there is much room for improvement. We know that feedback can be influenced by the source, the recipient and the message. What most people don’t know is that, when you’re evaluating a learner, you yourself could be unwittingly introducing bias – just like when we make diagnoses.

Types of Evaluation Bias:

Halo Effect

  • If a learner really excels in one area,  this may positively influence their evaluation in other areas.
  • For example, a resident successfully/quickly intubates a patient with respiratory arrest. The evaluator is so impressed she minimizes deficiencies in knowledge, punctuality, ED efficiency.

Central Tendency Bias

  • It is a human tendency to NOT give an extreme answer.
  • Recall from part 1 [link] that faculty tend to overestimate learners’ skills and furthermore tend to pass them on rather than fail them.
  • This may explain why learners are all ranked “above average”.

Severity/Leniency Bias [Hawks and Doves]

  • Some faculty are inherently more strict [aka “Hawks”] while others are more lenient [“Doves”]
  • Research is inconclusive on which demographics predispose evaluators to being either of these
    • One study suggested ethnicity and experience may be correlated with hawkishness

Recency Effect

  • This is more relevant to end-of-year evaluations.
  • Despite the fact that we all have highs and lows, recent performance tends to overshadow remote performance.

Contrast Effect

  • After just working a shift with an exemplary learner, the evaluator may be unfairly harsh when faced with a less capable learner.

Personal Bias [mental ‘filters’]

  • Evaluators can be biased by their own mental ‘filters’. These develop from their experiences, background etc. They will therefore form subjective opinions based on:
    • First Impressions [Primacy Effect]
    • Bad Impressions [Horn Effect] – the opposite of the halo effect [e.g. if you're shabbily dressed you may be brilliant, but might be underappreciated]
    • The “similar-to-me” effect – faculty will be more favourable to a trainee who is perceived to be similar to themselves.

Opportunity for EM Faculty

Okay. In part 1 you have seen how we as faculty are partly to blame for evaluation failure. Above, I have illustrated how we also introduce bias when we try to evaluate learners. Now for the good news …

  • What faculty need to ascertain is:
    • how well the trainee delivers HIGH QUALITY PATIENT CARE
      • what are the features of this?
      • how do I put this into words?
  • This can easily be done by:
    1. making observations – thankfully, in the ER we have all the tools we need [read below]
    2. synthesizing observations into an evaluation - in Part 3 of this blog I’ll show you how I go about putting my observations into words.

Each ER shift is a perfect laboratory for direct observation


  • During a shift in the ED, multiple domains of competency can be assessed – such as physical exam, procedural skill and written communication skills. In fact ALL of the CanMEDs competencies can be assessed in real time.
  • Additionally we [at U of S] already do things that are encouraged:
    • We provide daily feedback using an effective [validated] assessment tool. [Those daily encounter cards were designed using current evidence on evaluation]
    • Learners receive multiple assessments by several faculty members throughout their rotation
    • The ER also provides an avenue for 360 evaluation [by ancillary staff, co-learners and patients]


Hopefully you now know a bit more about why we as medical faculty fail at failing learners and how evaluation bias plays into this. I have tried to show that the ER provides a perfect learning lab for direct observation and feedback. In the next post I will prescribe my personal formula for successful trainee assessment. Comments please! [There's a link at the top under the title of the post]


Management Study Guide Website

Dartmouth Website

The post Failure to Fail Part 2: Types of Evaluation Bias appeared first on ERMentor.

Failure to Fail Part 1- Why faculty evaluation may not identify a failing learner



I recently gave a talk to fellow faculty on the phenomenon of “failure to fail” in emergency medicine. I am no expert, but I have tried to synthesize the details in a useful way. I have broken it down into three parts. Part 1 deals with the phenomenon of Failure to Fail. In separate posts I will introduce some forms of evaluator bias and then provide a prescription for more effective learner assessment in the ER. As always, comments are welcome!


  1. We expect our medical trainees to acquire the fundamental clinical skills
  2. We expect them to evolve from novice to expert.
  3. Our goal is to graduate cadres of competent physicians who will serve their communities safely, effectively and conscientiously.

The Importance of Direct observation and Work-Based Assessment:

The current model of medical training is a blend of didactic teaching, clinical learning, simulation and self-directed endeavours. We then try to evaluate the learners formatively and summatively through written exams and standardised clinical scenarios.

We are learning that the best tool to evaluate learners is direct observation in the work context. This requires four things:

  1. Deliberate practice [on the part of the trainee]
  2. Intentional observation [on the part of the faculty]
  3. Feedback
  4. Action planning

This will place even more emphasis on direct faculty oversight. We will therefore need to develop their skills and coach them how to:

  • Perform Direct observation
  • Perform a valid [i.e repeatable] evaluation of skills
  • Provide Effective feedback


Current State of Trainee Evaluation

FACT: the current model is sometimes failing to discriminate and fail learners

      • This occurs in spite of observed unsatisfactory performance.
      • This occurs despite faculty confidence with their ability to discriminate.
      • Most faculty agree that this is the single most important problem with trainee evaluation.

We’re trying to understand why… This is what I found.

  • The Lake Wobegon Effect – [wikilink]
    • From the fictional town featured on a radio show – a town where

 “all the women are strong, all the men are good looking, and all the children are above average”

    • Describes the human tendency to overestimate one’s achievements and capabilities in relation to others. [Also known as illusory superiority - link] 
  • Grossly inflated performance ratings have been found practically everywhere in North America:
    • Business – both employees and managers
    • University professors – overestimate their excellence [gulp]
    • Studies on ‘bad drivers’ – everyone has one of these in their family! 
  • Not surprisingly, this phenomenon is equally pervasive in medicine
    • Faculty struggle to provide honest feedback and consistent [valid] evaluations. [In one study, raters rated 66% of trainees "above average" – which is simply not possible! Pubmed]
    • Fazio et al [see references] demonstrated that 40% of IM clerks should have failed, yet were passed on … FORTY PERCENT!!!
    • A study by Harasym et al [Link] showed that OSCE examiners are more likely to err on the side of passing than failing students


    • Residents [in particular lowest performing ones] overestimate their competency compared to their ratings by faculty peers and nurses [Pubmed Link]
    • Moreover, the biggest overestimations lay in the so-called “soft skills” – communication, teamwork and professionalism. These are often the problems that give faculty and colleagues headaches with a particular learner.
    • One reason might be because  soft skills are hard to quantify – unlike suturing skills where incompetence is quickly identified 
  • End result is a culture of “failure to fail”  where …
    • Many graduates are not acquiring the required skill-set
    • We are failing to serve patient needs
      • Reduced safety, increased diagnostic error and  reduced patient satisfaction
    • Increasing negative fallout to ENTIRE profession – our reputations are being besmirched in the digital era. 
    • Ultimately public trust is being eroded.
    • We cannot succeed at our job without public confidence in what we do.

Why we fail to fail learners:

Barriers to adequate Evaluation:

Learner factors

  • Learners are all different. Moreover, the same learner will vary in skills through time as they grow and develop.
  • We all have good and bad days.
  • There exists a phenomenon called “Group-work effect” where medical teams can mask deficiencies of individual learners.

Institution factors

  • Tools of evaluation are flawed – some evaluation forms are poorly designed to discriminate between learners
  • We all work in the current culture of “too busy to teach”.
  • There is an incredible amount of work needed to change this culture

Faculty Factors

  • Faculty feel confident in their ability when polled.
  • Faculty feel sense of responsibility to patient, profession, learner BUT …
  • Raters themselves are the largest source of variance in ratings of learners:
    • Examiners account for 40% of the variance in scores on OSCEs
    • Examiners’ ratings are FOUR times more varied than the “true” scores of students
    • some tend to be more strict – “Hawks” … some are more lenient – “Doves”
    • negligible effect of gender, ethnicity and age/experience [though one UK study found that hawks are more likely to be from an ethnic minority and older – link]
  • Clinical competence of faculty members is also correlated with better evaluations [link]
    • One interesting study had faculty take an OSCE themselves, then rate students … results showed that:
      • Faculty use their own practice style as a frame of reference to rate learners
      • Better performers on the OSCE were more insightful and attentive evaluators

A 2013 convenience sample of U of S EM faculty gave these top three reasons:

1. Fear of being perceived as unfair

2. Lack of confidence in the supporting evidence when considering a “fail”

3. Uncertainty about how to identify specific behaviours

What I discovered in the literature:

  1. Competing demands [clinical vs educational] mean that education suffers.
  2. Lack of Documentation  - Preceptors fail to record day-to-day performance. So when it comes to end of rotation eval – not enough evidence.
  3. Interpersonal conflict model describes the following phenomenon:
  • Faculty members’ goal is to improve trainees’ skills – preceptors do care a lot!
    • They perceive the need to emphasize the positives and be gentle [to protect learner self-esteem and maintain an alliance/engage the trainee].
    • Faculty try to make feedback constructive without the learner feeling like it's a personal attack.
    • This creates tension when one is forced to give negative and critical feedback.
    • The emotional component of giving negative feedback makes it even more difficult.
    • Consequently this tension forces us to overemphasize the positives.
    • This creates mixed messages regarding feedback – learners walk away with the wrong message.

4. Lack of self efficacy:

  • There's a lack of knowledge of what specifically to document - a) Faculty don't know what type of information to jot down, b) Faculty struggle to identify specific behaviours associated with failure.

[The reported low self-confidence during evaluations is actually a product of our training [or rather lack thereof]. No-one teaches you how to navigate minefields in evaluation. This is particularly evident for soft skills. Staff often think that their judgements are subjective interpretations].

5. Anticipating an [arduous] appeal process – having an extra commitment, having to defend one's actions/comments, fear of escalation [e.g. legal action].
6. Lack of remediation options – there exists a lack of faculty support, which makes preceptors unsure about what to do/advise after diagnosing a problem.


We have seen that the current model of medical training is failing to identify and fail underperforming learners. There are several reasons why, but faculty themselves play a large role in this culture of “Failure to Fail”. In my next post I will highlight some biases that we encounter when judging learners and provide a prescription for more effective learner evaluation.

Acknowledgement- Dr Jason Frank @drjfrank for pointing me in the right direction [authors Kogan and Holmboe are ones you should search out in particular]


Dudek et al 2005 Failure to Fail: The Perspectives of Clinical Supervisors

Fazio et al 2013. Grade inflation in the internal medicine clerkship: a national survey.

Harasym et al 2008 Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs

Kogan and Holmboe 2013. Realizing the promise and importance of performance-based assessment.


The post Failure to Fail Part 1- Why faculty evaluation may not identify a failing learner appeared first on ERMentor.

Does that HIV-infected patient with a headache really require that CT head?


[CT image from radiopedia.org]

Having HIV is considered a red-flag feature of headaches – mandating CT. The main bad things that you’re looking for are toxoplasmosis, Cryptococcus and CNS lymphoma. This recent article [Pubmed link] by Luma et al. highlights the importance of making the diagnosis of toxoplasmosis in AIDS patients. However, since the widespread adoption of HAART regimens there has been an astounding reduction in HIV- and AIDS-related morbidity and mortality – meaning that opportunistic infections are also on the decline.

So … if the patient is well appearing, tells you she is immunocompetent and has a headache syndrome that sounds like a primary benign headache … Do they really need that CT? Here’s what my quick literature search discovered:




Maybe one of the reasons for a “CT for ALL” approach was this [pre-HAART] article by Elizabeth Tso et al [Pubmed Link] in 1993. They looked at the usefulness of CT in examining HIV patients with neurologic complaints.

In a predominantly African-American and inmate population, they performed a retrospective review of CT imaging results in:

  1. HIV +ve patients
  2. those with risk factors for HIV:
    • IV drug abuse [80%]
    • homosexual/bisexual
    • transfusion before 1985

CT’s were performed for:

  1. Headache
  2. focal neurologic signs
  3. other neurologic signs [e.g. visual changes]


146 patients got 169 CTs:

  • 85 Normal [50%]
  • 49 atrophy [29%]
  • 35 Focal lesion [21%] – 10 of which were mass lesions

Among the subgroup of 114 seropositive HIV patients there were 134 CT scans:

  • 68 [50%] normal
  • 43 [32%] atrophy
  • 25 [17%] focal lesion

In the subgroup of 32 patients with risk factors there were 35 CTs:

  • 17 [49%] normal
  • 8 [23%]  atrophy
  • 10 [28%] focal lesions

Looking specifically at the 99 patients presenting with headache +/- other symptoms … 85 had either normal scans or atrophy. 14 had a focal lesion:

  • Headache only 32 patients: 29 normal/atrophy, 3 focal lesions [10%]
  • Headache and photophobia 8 patients: 6 normal/atrophy, 2 focal lesions [25%]
  • Headache and Fever 36 patients: 33 normal/atrophy, 3 focal [10%]
  • Headache fever and stiff neck 6 patients: 5 normal, 1 focal lesion [17%]

Looking at 18 patients who got repeat scans:

7 of the 18 repeat scans differed from the initial scan:

  • 4 = Normal/Atrophy
  • 3 = New enhancing lesions [pts had: Seizure, worsening paralysis, altered LOC]

The quickest change occurred in a repeat scan done 24 days after the first CT.

Among the 76 visits that ended up getting admitted [45% of total]:

  • 68% [52] CT scans were normal/atrophy
  • 32% [24] focal lesions – 21 of these were admitted because of the scan [which is 12.4% of the total group]

Authors also conducted linear regression analysis. The following features were associated with CT abnormality:

  • Focal neurologic signs
  • Altered mental state



Authors recommend a “CT for ALL” approach because even non-focal presentations [i.e. headache alone] are associated with focal lesions. Furthermore, even if the patient is re-presenting to the ER after having a recent CT they should be re-scanned as changes can happen rapidly.

My thoughts:

  1. This was a retrospective review of imaging in a demographic with perhaps a high disease burden – so there may be selection bias.
  2. This paper also pre-dated HAART – so nowadays there may be even fewer positive scans in these patients.
  3. Even though this cohort was ‘pre-HAART era’, the vast majority of patients with headache had no focal lesions on CT [85%]. Only 12% of the time did the CT lead to an admission. So could CTs be performed on a case-by-case basis and perhaps non-emergently in the “headache-only” crowd?


A late-1990s publication by Rothman et al [Pubmed link] attempted to determine which signs and symptoms are predictive of new lesions on CT:


  • They prospectively analysed a convenience sample of 110 HIV-infected patients presenting to an inner-city hospital
  • Enrolled were any HIV patients presenting with new neurologic findings.
  • These patients were assessed using a standardized assessment tool and were subsequently sent to the CT scanner if they reported new symptoms.


[Image: Rothman study flowchart]

Clinical features most strongly associated with new focal lesions on CT were:

  1. New seizures
  2. Depressed or altered orientation
  3. Change in headache
  4. Prolonged headache ≥3 days
  5. Focal neurologic deficits

Use of criteria 1-4 would pick up 19/19 patients with new lesions. On further analysis they added criterion 5 [focal neurologic deficits] because of its association with the other 4 findings.

[Image: Rothman sensitivity/specificity table]

I like it when authors publish this kind of data because it allows me to tease out interesting and important findings – like asymmetric deep tendon reflexes, which have a 97% specificity for a lesion on CT. Take a moment to read these findings.
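As a quick refresher on where a number like that 97% comes from, here is a minimal sketch of sensitivity and specificity computed from a 2x2 table. The counts below are purely illustrative (chosen to land near 97% specificity), not the study's actual data:

```python
def sens_spec(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from a 2x2 table of counts."""
    sensitivity = tp / (tp + fn)  # finding present among those WITH a CT lesion
    specificity = tn / (tn + fp)  # finding absent among those WITHOUT a lesion
    return sensitivity, specificity

# Hypothetical counts: 3 of 19 lesion patients had asymmetric DTRs (tp=3, fn=16),
# and only 3 of 91 lesion-free patients did (fp=3, tn=88).
sens, spec = sens_spec(tp=3, fp=3, fn=16, tn=88)
print(round(sens, 2), round(spec, 2))  # prints: 0.16 0.97
```

This is why a finding can be a poor screen (low sensitivity) yet strongly rule in disease when present (high specificity).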

CD4 counts:

[Image: Rothman CD4 count breakdown]

75 patients had CD4 counts ≤200

  • 13/75 [17%] had new lesions on CT
  • In these patients, new seizures, change in orientation, and change in headache were predictors of CT lesions

29 patients had CD4 counts >200

  • only 4/29 [13%] had new focal lesions on CT
  • in these patients, new seizures and prolonged headache with nausea were predictive


This data may guide physicians to order stat imaging in HIV-infected patients who need it and to obviate CT in those who will not have new lesions requiring emergent management. The authors recommend further study of this concept.

My thoughts:

Small study; the convenience sample introduces selection bias.

New headache was 80% specific, but [perhaps because of my bias] the article suggests that mandating a CT scan for “headache only” is too conservative a strategy.

The majority [15/19 = 79%] of patients with lesions on CT also had CD4 ≤200 – which fits with the articles below.


[Image: Headache 2001 article title]

These authors, publishing in Headache 2001 [Pubmed Link], tried to come up with a clinical decision rule to help obviate the need for CT. Their hypothesis was that HIV-infected patients are at low risk of intracranial lesions if:

  1. there are no focal neurological signs or mental status alterations on physical examination
  2. there is no history of seizure
  3. CD4+ cell count is ≥200 cells/microL
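The hypothesized low-risk rule amounts to a simple all-criteria check, which could be sketched like this (function and parameter names are my own, not from the paper, and this is an unvalidated illustration only):

```python
def low_risk_for_intracranial_lesion(focal_neuro_signs: bool,
                                     altered_mental_status: bool,
                                     history_of_seizure: bool,
                                     cd4_count: float) -> bool:
    """True only if the patient meets ALL three hypothesized low-risk criteria."""
    return (not focal_neuro_signs
            and not altered_mental_status
            and not history_of_seizure
            and cd4_count >= 200)  # cells/microL

# e.g. an isolated headache with a CD4 of 350 would be rule-negative (low risk):
print(low_risk_for_intracranial_lesion(False, False, False, 350))  # True
```

Any single positive criterion pushes the patient out of the low-risk group.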

Study Methods:

  • Retrospective review of 365 patients in San Francisco at 2 urban EDs
  • Abstracted medical records from 364 HIV-infected patients who got a CT head for evaluation of headache.
  • Patients were categorized as low, intermediate, or high risk based on clinical criteria (focal neurological signs, altered mental status, history of seizure) and immune status (CD4+ count ≤200 cells/microL).
  • This cohort was compared to an unselected cohort of HIV-infected outpatients with headache who were all treated and followed in primary care (N=101) for 3 months.

Inclusion: patients with HIV/AIDS who got a CT head

Exclusion: meningitis in the last 30d, previously known toxoplasmosis, brain lymphoma, or other CNS disease


  • Overall
    • Mostly men [>90%]
    • Mostly >30 yo [>80%]
    • Mostly MSM, IVDU or both
    • CD4+ < 200 cells/mm3 in 70% and >200 cells/mm3 in 30%
    • 125 patients had no CD4+ on file – assumed to be >200 if their lymphocyte count was >2.0
  • CT vs “headache” group
    • CT More likely to have had an AIDS opportunistic infection
    • CT More likely to have a CD4+ < 50 cells/mm3
    • 8 pts lost to follow up in headache group
    • Overall – 50% of headache group had benign headache. Only 3/93 had lesions diagnosed on follow up.
  • CT findings:
    • 40/364 [11%] had +ve CT – of these 19 had definite intracranial mass lesions [5%] the rest were possible lesions.
    • Subjects classified as “low risk” [i.e. rule negative] – ZERO had lesions on CT
    • Subjects classified as “medium risk” [i.e. rule negative but CD4+ <200 or lymphocytes <2.0] – 9% had CT lesions
    • Subjects classified as “high risk” [e.g. focal neuro findings] – 21% had lesions on CT
  • Features Associated with positive CT:
    • Abnormal motor or sensory exam
    • Abnormal mental status
    • Previous Hx of opportunistic infection
    • CD4+ < 200
    • Toxoplasmosis +ve IgG
    • [non-significant trend with Hx of Seizures]
    • although type of headache wasn’t discriminating – a 1999 study [Pubmed Link] showed that a change in headache may predict a positive CT



  • Despite limitations [sample size, loss to follow-up in the headache group, retrospective design], the authors conclude that you can use these features to avoid unnecessary imaging in HIV-infected patients with headache

My Thoughts:

This was a retrospective review of imaging. Their algorithm can only be thought of as a derivation of a CDR that needs further validation.

That said, I think the evidence is growing that nowadays (in the HAART era) it may take more than just a headache to put a patient at risk of a clinically significant lesion on CT.

If the CD4 is >200 with no focal findings, seizure, or altered mental state – we should consider NO EMERGENT CT [unless things change]. If there are no clinical risk factors but the CD4 is ≤200 – this may be a patient worth scanning [or at least looking for more subtle clinical findings].


[Image: Graham et al. article title]

In the study above, Graham et al [Pubmed link] tried to elicit the clinical variables predictive of those who would benefit from CT. They postulated that because CD4 count correlates with opportunistic infection, it may be able to sort out who needs a CT.


Authors reviewed CT scan results and CD4 counts in HIV patients presenting with headache, excluding those with altered mental status, meningeal signs, neurologic findings, or symptoms of subarachnoid hemorrhage.

CT scans were considered positive or negative and were grouped according to CD4 counts:

  1. less than 200 cells/microL,
  2. 200-499 cells/microL,
  3. greater than or equal to 500 cells/microL.



178 patients total

128 negative scans [63%]

76 positive scans [37%]

  • 55 had atrophy alone [77% of positive scans] – all but 3 patients had CD4 > 200
  • 18 had mass lesion [23%]
    • 13/18 were Toxoplasmosis – all had CD4 < 200


The authors conclude that a CD4 count >200 obviates CT.

My thoughts:

Small series, but the figure above speaks for itself. All the pathology was in patients with low CD4 counts.


Bottom Line:


When it comes to neuroimaging in the HIV-infected population, the medical dogma suggesting a “CT for ALL” approach is being challenged. The most recent ACEP Guidelines aren’t particularly helpful [link], suggesting “neuroimaging for a new type of headache”. But the articles above [despite there being no validated CDRs] suggest that, in the post-HAART era, HIV-infected patients may need more than just a headache in order to mandate a stat CT. A reasonable approach might be to stat-scan those patients with:

  1. Seizures
  2. Altered mentation
  3. Focal findings
  4. CD4 < 200
  5. Prolonged headache
  6. “Headache Plus” [photophobia, change in affect etc]
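The suggested approach above boils down to an any-red-flag check, which could be sketched as follows (my own encoding of the post's suggestion, not a validated decision rule; names and the 200-cell cutoff are taken from the list above):

```python
def stat_ct_suggested(seizure: bool, altered_mentation: bool,
                      focal_findings: bool, cd4_count: float,
                      prolonged_headache: bool, headache_plus: bool) -> bool:
    """True if ANY of the proposed red flags is present -> consider stat CT."""
    return any([seizure, altered_mentation, focal_findings,
                cd4_count < 200,  # cells/microL
                prolonged_headache, headache_plus])

# Isolated headache with a CD4 of 400: no emergent CT; consider case-by-case.
print(stat_ct_suggested(False, False, False, 400, False, False))  # False
```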

Without these, an EMERGENT CT is probably not warranted and should be considered on a case-by-case basis. I have assumed that finding “atrophy” on a CT isn’t clinically significant to the ER doc. TSO states in her article that these are young people who shouldn’t have atrophy – so the finding of atrophy may be important in their overall management. So if they haven’t had a CT they might need one – it just may not need to be done in the ER [in the middle of the night]. Those are my thoughts – what do you think? Peer review via comments please :) [click on the link below the title of this blog post]






The post Does that HIV-infected patient with a headache really require that CT head appeared first on ERMentor.