Tiny Tips: The Ottawa SAH Rule

This month’s ALiEM Journal Club discussed Perry et al’s 2013 paper Clinical Decision Rules to Rule Out Subarachnoid Hemorrhage for Acute Headache. If you haven’t already, check out the awesome collaborative vodcast featuring ALiEM Editors, Annals of EM Editors and the Authors! If you can’t spare 30 minutes, watch at double speed.

I suspect the Ottawa SAH rule may be destined to join the ranks of Stiell et al’s other Canadian exports (the Canadian CT Head Rule, Canadian C-Spine Rule, and Ottawa Foot/Ankle/Knee Rules) and decided that I needed a way to remember it.

The mnemonic ANT LEaF was derived in my head and validated on Twitter with 8 favorites, 2 retweets and 1 supportive reply in 3 hours:

Mnemonic for the Ottawa SAH Rule: ANT LEaF

A - Age > 40y
N - Neck pain/stiffness
T - Thunderclap onset

L - LOC
E - Exertion onset
a - (filler letter)
F - Flexion decreased

It’s not perfect because it has nothing to do with the head, but I hope the image of an ant on a leaf inside a bleeding brain is a bizarre enough visual to tie them together. Note that the last letters of the two words are the two additional items added to one of the previously derived rules to increase its sensitivity from 98.5% (95% CI 94.6%-99.6%) to 100% (95% CI 97.2%-100%).

Jen Williams (@DrJenWilliams) suggested a slight reorganization of the letters to spell ELeFANT. If this sticks with you better, use it!

As usual, this Tiny Tip will be added to the Boring Cards Tiny Tips deck. For more information and instructions on how to use them, check out the Boring Cards page. Thanks to Perry et al for doing such great work!

Post frequency on BoringEM has decreased dramatically since I started writing at ALiEM and doing a lot more research, but I will continue to put out content intermittently. Please consider following along via email or social media (see the top of the column on the right).

Author information

Brent Thoma, MD MA
Editor of BoringEM | Simulation Fellow at Massachusetts General Hospital and Harvard Medical School | Emergency Medicine Resident at the University of Saskatchewan

The post Tiny Tips: The Ottawa SAH Rule appeared first on BoringEM and was written by Brent Thoma, MD MA.

Counterpoint: Why Graduate Medical Education will be fine

Written by: Teresa Chan MD, FRCPC   |   Peer reviewed by: Brent Thoma MD, MA

This is a Counterpoint piece responding to the KevinMD post by J. Russell Strader, MD, “Why graduate medical education is failing,” from December 19, 2013. Read the original post here.

 

The Impetus for my latest Counterpoint

In his piece on the KevinMD blog, Dr. J. Russell Strader implied that because he’s interviewing candidates whom he believes are “woefully unprepared” to practice independently as cardiologists, all of graduate medical education must be broken.

He goes on to tell an unfortunate anecdote about his experience as a junior cardiology fellow where he was under-supervised and oppressed.  In his blog he states: “If I had to call the attending for help, it was a failure.  And the attendings let me know it.”  He feels this “House of God”-like culture is the reason for his competence.

However, there is no evidence that would suggest this is the case.

What’s wrong with his conjecture?  Wasn’t it just an opinion piece?

Throughout my medical training, it has been impressed upon me that we must not infer truth from data that is faulty or incomplete. There is a danger in generalizing too much, and the final step in critical appraisal guides like the JAMA Users’ Guides to the Medical Literature is to ensure that we are applying the data to the correct population.  Don’t even get me started on using anecdotes about therapies or observations.  These days, even the most junior of clinical clerks at McMaster University will look at me quizzically and ask, “What’s the evidence for that?”

And yet, sometimes I hear really smart people overgeneralize and catastrophize based on anecdote and conjecture.  These are the same smart people who would instantly rebuke me if I said: “One time, we used blood-letting to cure a fever… So, my plan is to use that therapy.”

Opinion pieces are fine and dandy, but they should respect the work that has been done on these topics by thousands of medical educators. Non-medical readers may see opinion pieces like this, written by well-respected members of the medical community, and interpret them as true representations of medical education. For this reason, when we as physicians put our thoughts out for public consumption there is an inherent trust that we must uphold and it is imperative that we discuss how to do this better.

So, indulge me as I engage in some post-publication peer review. For assistance, I solicited comments from my Facebook friends (which I have posted with their consent) and the Twitterverse. I received a resounding response with many replies and personal messages.

****

Dear KevinMD Editors,

My name is Teresa Chan: I finished residency and passed my final examinations 5.5 months ago, and I am presently an attending physician. Please accept this open letter as a peer review of the piece posted on December 19, 2013.  It represents a culmination of multiple responses to Dr. Strader’s piece.

There are 5 major faults in the arguments presented in this piece about why graduate medical education (GME) is broken.

1) Context is Everything

How are we to judge competence?

The author states:

As we have looked for additional partners for my practice over the years, a common theme has emerged.  The doctors coming out of current GME training programs do not know how to act as a[n] independent physician.  They are unable to decide the right test to order, or the right medicine to prescribe, or what to do in the middle of the night when they are called to the bedside of a patient whose blood pressure is dangerously low, or who can’t breathe.

I have not read any literature to support the general incompetence of all new hires, and none is referenced here. If the author does have a method of assessment that determined these physicians do not know which “test to order, or… medicine to prescribe,” or what to do at 3am when a patient is hypotensive or apneic, I am sure the medical education community would welcome such a publication.  Of note, recent advances in this field (e.g. the script concordance test) have been called into question, and we are in need of such metrics.  As the piece does not state its methods, I suspect it is simply an anecdote without the support of data or the literature, but I would welcome an addition to the current literature if the author has a novel methodology for assessing exiting residents.

Standards have changed

While medical education has changed, so have standards.  As Dr. Nicole Swallow states:

Patients are not fodder for residents and students to learn from by trial and error.  The Libby Zion case caused massive work-hours reform for GME but was in a large part about a lack of clinical supervision.  And so, we changed.  We changed because it was in the best interest of our patients to ensure that they received care from a well-supervised team. Life-or-death decisions are learning opportunities, but their high-stakes nature requires that even the most senior of learners be supervised closely with their answers vetted and the evidence discussed.

As I start my career I am humble enough to admit that when I am in doubt I will swallow my pride and discuss decisions that have patient-care ramifications.  Several other new attendings that I have discussed this with have also admitted incorporating such measures into our practice because patient care is more important than bravado.  I do not think this makes me a less capable doctor – and in fact, I think it is an act of true self-reflection to acknowledge my limitations and overcome them. The more senior members of my group have helped by role-modelling this behaviour.

There is more evidence to consider

So much has changed between 2005 and 2013.  In 2005, the iPhone did not yet exist, YouTube had just launched its first video (April 23, 2005 at 8:27pm), and Wikipedia was only 4 years old.  Medical knowledge is not static.

As I have blogged about before, each year there is a ridiculous increase in the amount of knowledge we are accruing in the fields of healthcare.  The annual summary report of the Publishers Association states that for scientific, technical and medical (STM) journals in 2010, there were 2,000 publishers that collectively published 1.4 million peer-reviewed articles per year in 23,000 journals.   In light of this massive influx of data and “evidence,” can we be so sure that it is merely work hours and “less harsh” GME training that is driving the change? Are graduates perceived as less competent because of their training, or because they have far more to know than when the author trained? Do they still need to know the answer to every question, or do they need to know where to find the answer to every question? Does not knowing everything off the top of their head make them less competent if they can find an answer that is more comprehensive and up-to-date than one that has been memorized?

[Infographic: Understanding the Data Deluge: Comparison of Scale With Physical Objects, created by Julian Carver of Seradigm in New Zealand (http://www.saradigm.co.nz); used under Creative Commons license]

2) Methods

It would be hard to get a large enough sample from a few years of personally hiring cardiology fellows to draw any conclusions about their general competence, even using a robust research methodology. I cannot imagine that the sample the author has been exposed to is more than a fraction of the residents from a single specialty in a small number of jurisdictions. From this data, can we truly generalize these statements to all graduate medical education, everywhere?

His blog implies that we can, and I beg to differ.

3) Whither the data?

We are only now at the dawn of the age of Competency Based Medical Education. We have no metrics or measurements of previous generations of doctors on a comparable scale.  Perhaps, because current trainees are more robustly observed, they are better able to reflect on and learn from their experiences, and are actually more savvy about their decisions.  Is Dr. Strader misinterpreting a nuanced decision-making process that combines the latest literature with exceptional on-the-job teaching as indecision?

Examinations

We have always had examination scores to assess competence, but the psychometric properties of these examinations have changed as the state of medical education improved.  They have generally not assessed the managerial skills that Dr. Strader so ardently argues junior doctors now lack.  Previously, if a fellow ‘served his time’ we considered him ‘done,’ provided he passed his purely knowledge-based examinations. This shift has not been studied, but the Hawthorne effect would suggest that new attendings will actually be better prepared in these areas because attention is being paid to them.

Historical Cohort Comparisons

Dr. Strader’s ultimate conclusion is that new doctors today are incompetent relative to the physicians that graduated when he trained. Does the author have any proof of his own competency as a recent grad?  Or did a program director just take the word of his supervising doctors (the same ones who were in bed sleeping while he was working) when they hired him?  Does he have any evidence that all doctors of his generation were automatically able to be instant managers of CCUs across the nation?  That they could make life-or-death decisions without batting an eyelash? Or that they never made a silent wish that their fairy-god attending would swoop in and just tell them the answer?  Where is the data that supports his claim that newly minted doctors from previous eras were always able to make independent decisions immediately on day one of practice?  And even if they were making the decisions… was it just that they deluded themselves into thinking they made the right ones?

Because, according to legend, they were fellows and residents who never dared to review with their staff, and therefore were never able to learn from them in a timely fashion.  And yet, more recent evidence suggests that timely feedback from attending physicians improves learning and achievement. (Epstein, NEJM 2007; Ramani & Krackov, Med Teach 2012)  Sure, they made independent decisions, but who knows if they were the best ones?  And shouldn’t we, as attendings, ensure that our patients get the best care they can get, and that our learners can still learn and feel independent in that scenario?  The reflective practice model, as per Donald Schön, requires not just the act of practising, but also reflection and feedback (usually from an expert).

Transition Anecdotes

The author stated that:

“Ideally, the last year of one’s GME should be spent working as a de facto attending, being supervised only nominally.  In this way, the programs can assure that the graduates are able to hop from their training program into practice in a seamless transition… It doesn’t happen that way so much anymore.”

With no other evidence to discuss, we enter the realm of the anecdote. To counter the author’s own experiences, I will state that after my five years of GME training, I felt prepared for practice. My very last residency shift was essentially the same as my very first shift as an attending physician, except that I finally felt the freedom of not having to run around after my staff to tell them what I had already decided.  In Emergency Medicine, we have the privilege of direct supervision and observation for all of our shifts – right up until that very last day of residency.  Such supervision did not diminish my ability to learn to make critical decisions, and in fact a number of my attendings will attest to my steely resolve in convincing them of such things as intubations or admissions as a final-year PGY-5.

Having increased supervision does not automatically beget a decrease in decision-making, independent thought, or ownership of one’s decisions. Most clinical teachers ask learners to commit to a decision, and this technique has become a widely disseminated part of the lexicon of teaching programs in recent years (e.g. the one-minute preceptor, which is now even taught to residents in their training programs).

Perhaps my training is different from that of the new hires you are meeting, but I think it is the mark of a great training program to encourage autonomous thought (respecting dissent, encouraging discussion) so that learners can develop a sense of ownership over decisions without ever being ‘hung out to dry’.

 

4) The Fallacy of Arrogant Perception

‘Arrogant perception’ is a term I learned when I attended my initial teacher training program at OISE/UT (which I completed prior to medical school).  It helps us understand power dynamics and their implications for how the dominant person views the ‘other’ in any relationship. If the dominant party assumes that his or her frame of reference and world view should be imperialistically applied across the board, that is ‘arrogant perception’.

In medicine, this perception is often seen in two ways:

  1. between the teacher (dominant) and student (subjugated);
  2. between generations of physicians (older = dominant as they are often employers and managers; younger = lacking agency within medical infrastructures).

This seems to be an example of intergenerational arrogant perception. Although there are only 8 years between when Dr. Strader finished his training and when I finished mine, his blog post cements his point of view in the dominant position as a ‘generation’ up from me.  He is now in the dominant position of hiring people, and he seems to be applying his perceptions of competence to his prospective hires and finding them wanting.

Perhaps the author needs to be reminded that not everyone learns best in the way that he did.  Many of my fellow academic centre attending colleagues (who are emergency physicians or intensive care physicians) were quite saddened by this blog.

“[T]he critical decisions residents learn to make under the watchful eye of a preceptor are perhaps better for being made well rested: more evidence-based and more careful thought and attention to detail. The key is keeping the patient safe while still allowing the resident to think they are making completely independent decisions. [And when they're ready] to increasingly [add] complex and critically ill patients. It is the skills of the preceptor that determines this learning.” – Dr. M. Welsford

Other colleagues suggested that junior fellows should not be expected to run a cardiology critical care unit, nor be punished for asking opinions about managing life-threatening conditions – otherwise, what would be the point of their fellowship?  If they are failing to ask for help, it is a clear failure of the attending physician, not the resident.  Is it the role of a first-year fellow to know when they are out beyond their limits? Or is it the role of the attending to provide just enough of a push to get them out of their comfort zone and into their Zone of Proximal Development so that they grow?

Senior learners also had negative reactions to this post.

“Unfortunately I think going back to “the way it was in my day” is to the detriment of patient care in many instances. I completely agree with Michelle [Welsford] in that preceptors have differing skills/styles to foster independence in learners. The best preceptors I have had were the ones who pushed me to make critical decisions independently but who were supportive, present, and made me feel like they would prevent me from harming a patient.” – Dr. D. Atrie, Chief Resident (PGY-5)

One residency program’s Twitter account even chimed in:

In the comments back to Dr. Strader, one anonymous commenter states: “A new grad is a new grad. We all needed help right out of the gate.”  Similarly, one person stated in my Twitter feed:

@MedPedsDoctor @TChanMD In end all drs have to deal with occ uncertainty without backup yet have to act decisively with no right answers

— Adr Born (@ClinicalArts) December 21, 2013

Just because the author learned that way does not make it the best way.  In fact, faculty developers would suggest that a skilled teacher should have many more tricks in their toolbox, as suggested in the BMJ article by Kaufman in 2003.  There is ample evidence in the medical education and patient safety literature suggesting that the methods described in the blog are not ‘best practices’ in graduate medical education.

5) A Classic Case of Nostalgialitis Imperfecta

Stella Ng (https://twitter.com/StellaHPE/) felt that this post was:

Dr. Ken Milne (who is a Chief of Staff at a Canadian hospital and also the voice behind the SGEM) sent me a note in support of my stance.  In it, he likened the sentiment of the post to a piece of classic comedy: Monty Python’s “The Four Yorkshiremen.”

These comments reminded me of an earlier Twitter-based discussion I had with Eric Holmboe (an adjunct professor at Yale, and Chief Medical Officer at the American Board of Internal Medicine) a few months ago. He told me then that he had a term for the phenomenon of viewing one’s training through ‘rose-coloured’ glasses.  He refers to it as ‘Nostalgialitis Imperfecta’.

@TChanMD @BoringEM good for you – really nice done and much needed to address the “nostalgialitis imperfecta” syndrome among folks of my age
— Eric Holmboe (@boedudley) September 8, 2013

The current system has been vetted more fully, with stakeholders (e.g. patients) being included more closely.  We have sought to balance patient welfare with graduated responsibility.  As one of my colleagues pointed out: “…our current system handles graduated responsibility better than the previous, both at providing support for juniors and independence for the seniors.”

Conclusions

This whole Counterpoint boils down to one thing: it’s time we got more critical about the blogosphere.  We need to stop posting unsupported, non-evidence-based conjecture without so much as a shred of corroborating evidence from the robust body of literature that exists in medical education. When anecdotes are used, the lack of evidence should be acknowledged and dramatic, overreaching conclusions should not be drawn. A blog is not a peer-reviewed journal, but it should take the evidence into account when drawing conclusions. And when doing so, that evidence should be wielded critically, referencing the most robust and credible papers. One need only look at the December 4th edition of JAMA to see that there is great research being done in medical education.  In fact, there is also a whole journal published by the ACGME called the Journal of Graduate Medical Education.

My plea to the editors of KevinMD and other blog writers like Dr. Strader: can we stop with the unsupported, non-evidence-based discussions of how ‘today’s doctors’ are lesser than those before?  There is actually plenty of evidence out there suggesting you may be right (especially regarding highly technical specialties, wherein reduced work hours may in fact be decreasing procedural prowess), but we need to be citing it.  Let’s have this discussion; it’s important.  But to borrow a phrase from celebrity chef Emeril Lagasse: “Let’s kick it up a notch!” KevinMD, you can do better.

Addendum:  The December 25, 2013 issue of JAMA gave me a present this morning by going to press with an article that speaks (eerily precisely) to the issue of supervision and autonomy.  It is one of their Viewpoint articles, so again it is more synthesis and hypothesis than pure research, but it incorporates previous literature and works.  Article:  Schumacher DJ, Bria C, Frohna JG. The Quest Toward Unsupervised Practice: Promoting Autonomy, Not Independence. JAMA. 2013;310(24):2613-2614. doi:10.1001/jama.2013.282324. Click here to go to the link.

___________________________

P.S. Thank you to my friends and colleagues who allowed me to quote them from Facebook and Twitter on their responses to this blog.  Whether I attributed your name (with your consent) or not (i.e. you just helped me build my arguments), I appreciate the discussion!

Author information

Teresa Chan
Teresa Chan
Emergency Physician. Budding Medical Educator.

The post Counterpoint: Why Graduate Medical Education will be fine appeared first on BoringEM and was written by Teresa Chan.

Counterpoint: Chunking theory and why medical mnemonics can be useful.

By Teresa Chan, MD    |    Peer-reviewed by Brent Thoma, MD

Lately, I’ve been pondering the role of mnemonics.

Join me as I wax philosophical and dig into the education theory behind why we’re tempted by these tiny packages of information.

 

Why am I thinking about Chunking?

I’ve been reading a lot about the psychological sciences right now as I have been working on my thesis proposal.  Some of the great work in this area was done on chess masters and their inordinate ability to remember chess boards mid-game.  This led to theories around a phenomenon now known as ‘chunking’.

Then, I was cc’d on a tweet on December 15, 2013.

It brought me to a post by ‘Compound Fracture’ that stated:

There comes a point in medical education where mnemonics become a crutch or are even harmful to your knowledge. You want to be able to understand a disease state enough to be able to reason through the clinical presentations rather than trying to remember what the ‘E’ in CHORD ITEM stands for.

At first glance, I can see what this person means.  You shouldn’t supplant clinical reasoning with random medical mnemonics.  And yet, for my thesis I have delved quite deeply into the psychological and educational research around diagnostic reasoning… and I’m not sure that this ‘gut reaction’ is evidence-based.  So, now it is time for another BoringEM.org Counterpoint!

- TChanMD


Dear ‘Compound Fracture’:

Your point is well taken, but you could say this about nearly any educational intervention if it ends up supplanting actual thinking and reasoning. Mnemonics have their role as well, and a substantial basis in the psychological literature.

‘Chunking’ is a memorization strategy described by Miller [http://en.wikipedia.org/wiki/C...]. He found that experts chunk things together (finding patterns, seeing them, using them) so that they can memorize more.

If you are relying on mnemonics to remember everything… I agree that is unreasonable and foolhardy. In fact, I would argue that in clinical practice if you’re trying to remember a letter in a mnemonic, just look it up – for you and your patient’s sake.

However, for testing (and for when you are being pimped) I think there is a role for chunking (actually I think we should really just change the tests so you don’t have to use mnemonics…but I digress). You can read my related rant here (My first Counterpoint).

Medical Mnemonics for tests:

I think in certain cases there is little harm in using good, evidence-based approaches to conquering your education goals. We are often evidence-based with our medicine, but funnily enough, we don’t encourage the same rigour of knowledge translation around other key sciences that can help us. I would suggest that learning to create MEANINGFUL mnemonics can be useful to cue your brain to connect disparate and random concepts (e.g. a list of 8 drugs that cause methemoglobinemia). In the end, yes, you COULD go back to first principles and learn EVERY SINGLE pharmacokinetic and structural similarity between those 8 drugs, or you might create a mnemonic for rapid recall. Pragmatically, for many high-stakes exams (e.g. Boards or Royal College examinations), the mnemonic is often the more realistic option.

Medical Mnemonics For Clinical Practice:

Like I said above, if you’re using a mnemonic and can’t remember it, you should probably just look it up. However, there can be a role for mnemonics in helping you remember a checklist (e.g. the SIGNOUT handover mnemonic, the SBAR mnemonic, or FAST HUG for ICU safety), and that can be useful.

Bottom Line on Medical Mnemonics:

I think the educational evidence suggests that mnemonics can be useful, and you can consider using them (especially in time-limited testing scenarios) without relying on them in clinical practice.

 

Conclusion

Thanks to ‘Compound Fracture’ for being skeptical about mnemonics. It is important to be skeptical about things.  Education needs to be critiqued with the same rigour as medicine.  Taking things for granted is probably how we got into a lot of ruts (e.g. teaching the way we were taught).  With the dawn of EBM, this became less and less so.  And now… we’re on the verge of Evidence-Based Medical Education (EBME!), so let’s get out there – research and discuss education topics!  Question things!  But if there’s evidence, let’s start using it.

________

P.S. This postscript is inspired by one of my friends and mentors, Dr. Ken Milne (The Skeptic’s Guide to Emergency Medicine). Let’s get medical educators skeptical as well! :D

Title Picture Credit: Attribution-NoDerivs 2.0 Generic (CC BY-ND 2.0) by Jon Fingas (http://www.flickr.com/photos/jfingas/)

Author information

Teresa Chan
Teresa Chan
Emergency Physician. Budding Medical Educator.

The post Counterpoint: Chunking theory and why medical mnemonics can be useful. appeared first on BoringEM and was written by Teresa Chan.

Tiny Tips: Approach to the Alarming Ventilator

The December edition of EM:RAP had a great segment with Haney Mallemat and Stuart Swadron that went over five “Critical Care Quickies.” One of them presented an approach to an alarming ventilator that really caught my eye.

The classic mnemonic that I was taught for this situation was:

Differential for the Alarming Ventilator: DOPE(S)

D - Dislodged tube
O - Obstructed tube (mucous plug, blood, kink)
P - Pneumothorax
E - Equipment failure (ventilator, tubing, etc.)
S - Breath Stacking (Auto-PEEP)

This is certainly helpful for diagnosing the cause of an alarming ventilator, but it does not provide an approach to dealing with these problems. While it’s still a mnemonic that I am going to keep in the back of my mind, I really liked the DOTTS mnemonic that Haney offered to outline the things that you should actually do. While it could certainly be argued that, if you know what you’re doing, DOPES will result in a similar approach, I liked the order and direction of DOTTS.

Approach to the Alarming Ventilator: DOTTS

D - Disconnect the patient from the ventilator +/- provide gentle pressure to the chest (assesses for and treats breath Stacking and Equipment failure)
O - Oxygen (100%) and manual ventilation with a bag (check compliance by squeezing the bag: difficult bagging suggests Pneumothorax or Obstructed tube; very easy bagging suggests Dislodged tube or Equipment failure due to a deflated cuff)
T - Tube position/function (see if the tube has migrated to assess for a Dislodged tube; pass a bougie or suction catheter through it to see if the tube is Obstructed)
T - Tweak the vent (prevent breath Stacking by decreasing the respiratory rate, tidal volume, or inspiratory time)
S - Sonography (assess for pneumothorax, mainstem intubation, plugging)

As usual, this Tiny Tip will be added to the Flashcard Deck that I am building. For more information and instructions on how to use them, check out the Boring Cards page. Thanks to Haney Mallemat, Stuart Swadron, and the rest of the EM:RAP team for the great tip and outstanding podcast!

Author information

Brent Thoma, MD MA
Editor of BoringEM | Simulation Fellow at Massachusetts General Hospital and Harvard Medical School | Emergency Medicine Resident at the University of Saskatchewan

The post Tiny Tips: Approach to the Alarming Ventilator appeared first on BoringEM and was written by Brent Thoma.

Chalk Talk #1: Acute Pain Control

BoringEM Chalk Talks are short (<5 minutes), basic videos aimed at contextualizing preclinical knowledge. They were initially created for the students of the Harvard Medical School Learner-Directed Simulation Program. I found that similar topics seemed to come up in a lot of the groups that I was teaching. Rather than doing the same basic tutorial for all of them, I decided to do it once and post it here for them to view. If anyone else finds it useful, that’s a cherry on top.

Acute Pain Control

This video was created to provide a basic approach to acute pain control. It discusses oral and intravenous opioids (morphine, fentanyl, hydromorphone and oxycodone) and non-opioids (acetaminophen and NSAIDs, including ketorolac and naproxen).

http://www.youtube.com/watch?v=o2m2sqdVIlk

It is also available as a podcast via SoundCloud.

Summary

Acute pain can be treated with opioids and non-opioids.

Opioids can be given intravenously or orally. Common intravenous opioids include morphine (2-5mg dose; 20 minute onset, 3-4 hour duration) and fentanyl (~0.75-1mcg/kg dose; 3-5 minute onset, 30 minute duration). Common oral opioids include hydromorphone (2mg tabs) and oxycodone (1-2 tabs; usually combined with non-opioid analgesics). Watch for respiratory depression, hypotension and altered mental status. Be cautious when treating patients with kidney or hepatic failure and with elderly or unstable patients.

Non-opioids include NSAIDs and Acetaminophen. Ketorolac is an NSAID that can be given intravenously (10mg dose; 6h duration) while ibuprofen (400mg dose; 6h duration) and naproxen (500mg dose; 12h duration) are NSAIDs that can be given orally. They should be used carefully in the elderly and generally not given to patients with renal insufficiency or a history of ulcers. Acetaminophen (650-975mg; 6h duration) can also be given orally. It should generally not be given to patients with alcoholism or hepatic failure.

Conclusion

As I’m tentatively planning to make more of these I would appreciate any feedback. Was it useful or am I wasting my time? Should I keep posting them as YouTube videos? Or podcasts? Or both? Would you rather I draw things out Khan Academy-style? Or was the Prezi okay? Thanks!

Author information

Brent Thoma, MD MA
Editor of BoringEM Simulation Fellow at Massachusetts General Hospital and Harvard Medical School Emergency Medicine Resident at the University of Saskatchewan

The post Chalk Talk #1: Acute Pain Control appeared first on BoringEM and was written by Brent Thoma.

The Precordial Thump: Good, Bad or Ugly?

It’s one of those mythical interventions that everyone has heard of, fewer have attempted and fewer still have seen used successfully. Everyone knows someone who knows someone that brought a patient back from the brink using only hope and the power contained within a tightly clenched fist. There’s something almost mystical about restarting a patient’s heart with a bare hand. However, these events are so rare that until Nadim Lalani’s tweet flew across my feed I actually thought the precordial thump had already gone the way of the triceratops.

I don't always defibrillate

NOOOOO! pic.twitter.com/5Q8YJbfWbb
— Nadim Lalani (@ERmentor) October 5, 2013

Originally documented in 1920 by Schott in the paper “Ventrikelstillstand (Adam-Stokes’sche Anfälle) nebst Bemerkungen über andersartige Arrhythmien passagerer Natur,” the precordial thump is a legendary procedure that involves thwacking the sternum of an arresting patient with sufficient force to cause coordinated cardiac depolarization. Even better, it has the potential to be a fast, low-tech, ninja-like way to achieve ROSC. It is how Chuck Norris would defibrillate. Clinicians who have managed to pull it off are spoken of with a level of awe appropriate for their awesomeness. Those that mess it up… just look like they’re hitting the poor patient that arrested.

This post will review thumping technique and examine the latest evidence to help you decide to thump… or not to thump.

Technique

The precordial thump technique used in the literature is fairly consistent. Descriptions include:

  • Miller (1984): “a precordial thump using the fleshy part of the hypothenar eminence is delivered from a height of eight to 12 inches above the sternum.”
  • Amir (2007): “thump from a height of 8–10 inches (20-25cm) aimed to the junction of the middle and lower parts of the sternum”
  • Haman (2009): “clenched fist forcefully applied from the height of 20-30cm to the junction of the middle and lower third of the patient’s sternum.”
  • Pellis (2010):  “delivered from a height of about 20 cm, using the ulnar edge of the tightly clinched fist” along with “active retraction of the fist after full impact.”
  • Nehme (2013): “single sharp blow to the patient’s mid-sternum using the medial aspect of a clenched fist from a height of 20–30 cm.”

Consensus definition: A precordial thump is an impact delivered with the ulnar aspect of a clenched fist from ~20cm above the patient’s chest to the lower third of the sternum, followed by immediate retraction.

YouTube has a demonstration of a precordial thump on a manikin (see 1:02). If anyone has a better video please send it to me!

http://www.youtube.com/watch?v=p-wkeDyJ_ho

The Guideline

Evidence on the benefits and harms of the precordial thump is not consistent. The 2010 ACLS Guidelines concluded:

The precordial thump should not be used for unwitnessed out-of-hospital cardiac arrest (Class III, LOE C). The precordial thump may be considered for patients with witnessed, monitored, unstable ventricular tachycardia including pulseless VT if a defibrillator is not immediately ready for use (Class IIb, LOE C), but it should not delay CPR and shock delivery. There is insufficient evidence to recommend for or against the use of the precordial thump for witnessed onset of asystole.

To reiterate, the appropriate patient for this procedure as per ACLS is one with a witnessed, unstable ventricular rhythm when a defibrillator is not immediately available. This could include hospital patients who decompensate into an unstable ventricular rhythm prior to the attachment of defibrillator pads.

Before I review the fancy new September 2013 study that finally got me to learn about the thump I will review the literature that led to the ACLS recommendation above.

The Evidence

The available studies investigated the effect of a precordial thump on patients’ immediate condition and have been done in three settings (electrophysiology suites, hospital wards and prehospital settings). Technically, we cannot calculate a real NNT for these studies because they do not have a control group. However, if we assume that the positive/negative effect of not doing a precordial thump is close to 0 we can use that value to calculate an Absolute Risk Reduction and the NNT. While that assumption may be flawed, I think even an estimated NNT does a better job of putting the benefits and harms in perspective than a bunch of other numbers.
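As a minimal sketch of that back-of-envelope arithmetic (assuming, as above, that the effect of not thumping is zero, so the absolute risk reduction is simply the observed conversion rate), the estimated NNT is just the reciprocal of that rate. The function name here is mine, for illustration only:

```python
# Estimated NNT assuming the event rate without a thump is 0,
# so ARR = successes / patients and NNT = 1 / ARR = patients / successes.
def estimated_nnt(successes: int, patients: int) -> float:
    arr = successes / patients  # absolute risk reduction vs. an assumed 0% control rate
    return round(1 / arr, 1)

# Figures from the electrophysiology studies discussed below:
print(estimated_nnt(1, 80))   # Amir (2007): 80.0
print(estimated_nnt(2, 155))  # Haman (2009): 77.5
```

Of course, these estimates are only as good as the zero-effect assumption standing in for the missing control arm.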

Electrophysiology Studies

Want to study a lot of patients with ventricular arrhythmias in a short period of time? The electrophysiology suite is the place to be! With many of their procedures requiring ventricular stimulation they have a lot of patients that flip into ventricular arrhythmias. They also have continuous monitoring to detect their onset and a cardiologist standing by with the ability to perform a precordial thump almost instantaneously. Two studies have been done in this environment.

Amir (2007) and Haman (2009) conducted similar nonrandomized prospective studies that assessed the performance of the precordial thump on ventricular arrhythmias induced in this setting and had similar results. Amir had 1/80 patients convert while Haman had 2/155 (estimated NNTs of 80 and 77.5, respectively). All 3 of the patients that converted were in monomorphic VT.

With such a small number of patients converting it does not seem to be an effective treatment in these patients, especially considering they are already prepared for immediate electrical cardioversion. However, these patients represent a different population than those seen in the prehospital and ED settings.

These studies do provide evidence for the safety of the thump itself, as neither documented traumatic injury nor transition to arrhythmias with a poorer prognosis.

Hospital Studies

The hospital has a lot of sick, monitored patients and coordinated code teams. Surely it would also be a fertile environment for a study? Unfortunately, not a lot of work has been done in this setting. The one prospective study done in the hospital environment does not appear to be available in English.

Volkmann (1990) analyzed a group of 47 hospital patients and reported a thumping success rate of 77% for termination of VT at rates of <160 with no adverse events – I wish I knew the Volkmann technique! Efficacy was reduced at higher rates and it was completely ineffective during VF.

This single study and various case series seem to have provided the majority of the evidence for the use of the thump. However, it seems to be an outlier as its results have not been replicated.

Prehospital Studies

Miller (1984) conducted a nonrandomized prospective pre-hospital study on 50 patients who went into VF or pulseless VT at some point during their resuscitation. Unfortunately, the methodology was not quite up to today’s standards and insufficient data were provided to determine how long after arrest the patients received the precordial thump. Additionally, while thumps were delivered only for VF/VT, these were not the initial rhythms in 20 of the 50 patients.

Regardless, the study did not support the practice. Only 2/27 patients were thumped from VT to a supraventricular rhythm with a pulse (estimated NNT of 13.5) and 12/27 transitioned to a less favorable rhythm (8 VF, 3 asystole and 1 PEA – estimated NNH of 2.25). None of the VF patients had a rhythm change after the thump. This suggests precordial thumps are much more likely to convert the patient to a rhythm with a worse prognosis than to reboot the heart.

Pellis (2009) conducted a nonrandomized prospective study on 144 patients with out-of-hospital arrest and a more modern methodology. Sort of. There was no randomization and inclusion in the study cohort was dependent upon whether or not the EMS providers elected to try a thump prior to proceeding with standard ACLS (219 patients were excluded as a result). A thump was delivered to qualified patients after the attachment of defibrillation pads to allow for identification of the rhythm.

Interestingly, while the precordial thump led to ROSC in only 3/144 patients (estimated NNT of 48), the 3 patients it worked on had witnessed asystolic arrests. In total, 3/11 witnessed, asystolic arrests achieved ROSC and 2/3 of those were discharged from hospital. Across the whole cohort, those work out to estimated NNTs of 48 for ROSC and 72 for discharge. Both of the patients discharged were thumped within 60 seconds of arrest. There were no injuries reported in the patients following precordial thump, but three had other rhythm changes (PEA -> Asystole; Asystole -> PEA; VT -> PEA).

The New Study

Finally, to the study that inspired this post! Following Nadim Lalani’s tweet about this recently published study the twitterverse lit up.

@ERmentor Wait… NNT=20? That’s still pretty good isn’t it?
— TChan (@TChanMD) October 5, 2013

Wait, 5%?! That’s 5% who don’t need ACLS… I’m doing it MORE now! RT @ERmentor: NOOOOO! pic.twitter.com/ICiSvZvEwz
— Todd Raine (@RaineDoc) October 5, 2013

@RaineDoc @ERmentor That’s better results than any ACLS drug…
— Ben Addleman (@Addleben) October 5, 2013

@TChanMD I agree, I was more concerned with the number-needed-to-look-awesome :)
— Rory Spiegel (@CaptainBasilEM) October 5, 2013

So what was all the fuss about? What IS the number needed to look awesome? How many times will I have to look like I’m hitting a poor patient before it works?

Nehme (2013) was a nonrandomized retrospective study of data from the Victorian Ambulance Cardiac Arrest Registry in Australia. They included 434 patients with VF (305) or pulseless VT (37) of presumed cardiac etiology. 103 were initially treated with precordial thump and the other 325 with defibrillation.

Unsurprisingly, defibrillation was substantially more effective. Combining the data from both rhythms, precordial thump resulted in ROSC in 5/103 patients (estimated NNT of 20.6) while defibrillation led to ROSC in 188/325 (estimated NNT of 1.7). More concerning, 10/103 thumped patients (estimated NNH of 10.3) shifted into rhythms with a poorer prognosis (8 VT -> VF; 1 VF -> PEA; 1 VF -> Asystole).

Conclusion

As awesome as it would be if this worked, based on the evidence that I have reviewed I’m not sure why this procedure is still in the guidelines. The newest study is the only one that was not available for the 2010 iteration of ACLS and it thumps another nail into this procedure’s coffin.

Rhythm degradation is the natural course of a cardiac arrest, so it is possible that it was not a result of the thumps. However, the excellent outcomes with electricity make it clear that early defibrillation should be the #1 intervention. The weight of the evidence suggests that the thump causes more harm than good, so the classic teaching that it “can’t hurt and may help” should be corrected and legendary docs may have to find another way to look awesome.

I definitely moved away from the “boring” with this post, but once I got into it I just couldn’t stop. I’ll get right back to mind numbing with my next post, I promise. Thanks for reading and sharing.

Thanks as well to my reviewers, Drs. Teresa Chan and Rory Spiegel.

Author information

Brent Thoma
Simulation/Research Fellow at Massachusetts General Hospital
A Canadian that loves emergency medicine, simulation, education, mentorship, leadership, quality improvement, writing, parliamentary procedure, Star Wars, Dodgeball, his dog and a few people.

The post The Precordial Thump: Good, Bad or Ugly? appeared first on BoringEM and was written by Brent Thoma.