When Is An Alarm Not An Alarm?

What is the sound of one hand clapping?  If a tree falls in a forest, does it make a sound?  If a healthcare alarm is in no fashion alarming, what judgement ought we make of its existence?

The authors of this study, from UCSF, compose a beautiful, concise introduction to their study, which I will simply reproduce, rather than unimpressively paraphrase:
“Physiologic monitors are plagued with alarms that create a cacophony of sounds and visual alerts causing ‘alarm fatigue’ which creates an unsafe patient environment because a life-threatening event may be missed in this milieu of sensory overload.”
We all, intuitively, know this to be true.  Even the musical mating call of the ventilator, the “life support” of the critically ill, barely raises us from our chairs until such sounds become insistent and sustained.  But, these authors quantified such sounds – and look upon such numbers, ye Mighty, and despair:
2,558,760 alarms on 461 adults over a 31-day study period.
Most alarms – 1,154,201 of them – were due to monitor detection of “arrhythmias”, with the remainder split between vital sign parameters and other technical alarms.  These authors note that, in an effort to combat alarm fatigue, audible alerts had already been restricted to those considered clinically important – which reduced the overall burden to a mere 381,050 audible alarms, or only 187 audible alarms per bed per day.
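
For the arithmetically curious, a quick back-of-the-envelope check on that per-bed figure – my arithmetic, and note the bed count is reverse-engineered from the authors’ numbers rather than stated here:

    381,050 audible alarms ÷ 31 days ÷ ~66 beds ≈ 187 alarms per bed per day

Put another way, that is one audible alarm roughly every 8 minutes, per bed, around the clock.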

Of course, this is the ICU – many of these audible alarms may, in fact, have represented true positives.  And, many did – nearly 60% of the ventricular fibrillation alarms were true positives.  However, next up was asystole at 33% true positives, and it just goes downhill from there – with a mere 3.3% of the 1,299 reviewed ventricular bradycardia alarms classified as true positives.
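
To put that last figure in perspective – again, my arithmetic, not the authors’:

    3.3% × 1,299 alarms ≈ 43 true events, leaving ≈ 1,256 false alarms

That is roughly 29 cries of “wolf” for every genuine ventricular bradycardia.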

Dramatic redesign of healthcare alarms is clearly necessary so as not to detract from high-quality care.  Physicians are obviously tuning out vast oceans of alerts, alarms, and reminders – and some of them might even be important.

“Insights into the Problem of Alarm Fatigue with Physiologic Monitor Devices: A Comprehensive Observational Study of Consecutive Intensive Care Unit Patients”
http://www.ncbi.nlm.nih.gov/pubmed/25338067


CPR: Crushing It With Machines

In movies and television, CPR is a miraculous, dramatic event.  Pulses return, patients open their eyes and make a witty remark, and everyone celebrates.

The reality: CPR is a brutal, violent intervention.  And, the new mechanical CPR devices are even more so.

This is a brief autopsy-based survey of 222 patients in whom CPR was unsuccessful, 83 of whom were treated with manual CPR only, and 139 of whom received primarily mechanical CPR.  Mean age was ~67 years, about 30% were female, and mean CPR time was ~35 minutes.  That is, to say the least, ample time to crush the thorax.

A brief accounting of the injuries from CPR:

  • Multiple rib fractures: 57.3% manual, 65.0% mechanical
  • Sternal fractures: 54.2% manual, 58.3% mechanical
  • Intrathoracic bleeding: 36.1% manual, 48.9% mechanical
  • Cardiac injuries: 7.2% manual, 15.2% mechanical
  • Liver injuries: 3.6% manual, 7.9% mechanical

Pathologists at autopsy, however, did not judge any of the injuries from CPR to have contributed to the cause of death.

Injuries in those for whom CPR is unsuccessful are intuitively more severe than in those who survive – non-survivors being generally older and more brittle, and subject to longer durations of CPR.  However, it’s an interesting window into the destructive nature of vigorous CPR – and the increased injury associated with mechanical devices.

“CPR-related injuries after manual or mechanical chest compressions with the LUCAS device: A multicentre study of victims after unsuccessful resuscitation”
http://www.ncbi.nlm.nih.gov/pubmed/25277343

The Trauma Pan-Scan Saves Lives

… and redeems them for valuable prizes.

Such is the message of this systematic review and meta-analysis, evaluating the published literature comparing “whole body CT” – indiscriminate complete scanning – with “selective imaging” – scanning only as indicated by physical examination.

Identifying seven studies, comprising 23,172 patients, these authors found a 20% reduction in mortality associated with the use of WBCT – 16.9%, versus 20.3% with selective imaging – despite a higher mean Injury Severity Score in the WBCT cohort.  The implication: choosing a selective scanning strategy was harmful, even in the face of a less-injured cohort.  Thus, the authors conclude the mortality advantage far exceeds any risks from radiation, and WBCT should be considered the standard method of evaluation.
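
Before accepting the headline figure, it is worth unpacking the arithmetic – mine, from the raw proportions, not the authors’ pooled estimate, which weights the individual studies:

    Absolute difference:       20.3% − 16.9% = 3.4%
    Relative risk reduction:   3.4% ÷ 20.3% ≈ 17%
    Odds ratio:                (0.169 ÷ 0.831) ÷ (0.203 ÷ 0.797) ≈ 0.80

So the “20% reduction” is closest in odds terms; in absolute terms, the difference is 3.4 percentage points.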

Except, all but 2,610 of the patients in these pooled studies are from retrospective cohorts fraught with selection bias.  There are many reasons why trauma patients with lower ISS might yet have higher mortality, and why aggressive diagnostic evaluation might never have been on the table – the patient too unstable for the scanner, or taken directly to the operating theater, lands in the “selective imaging” cohort by default.  And, when those retrospective patients are tossed out, the comparison is a wash in the prospectively studied cohort.

If you’re a fan of selective imaging, this study probably changes little in your mind.  If you’re a fan of WBCT, it’s another citation to add to your quiver.  The authors of this study are hoping REACT-2 gives us the definitive answer – but with only 1,000 patients, I doubt that will be the case, either.

“Whole-body computed tomographic scanning leads to better survival as opposed to selective scanning in trauma patients: A systematic review and meta-analysis”
http://www.ncbi.nlm.nih.gov/pubmed/25250591

Clinical Informatics Exam Post-Mortem

I rarely break from literature review in my blog posts (although, I used to make the occasional post about Scotch).  However, there are probably enough folks out there in academia planning on taking this examination, or considering an Emergency Medicine Clinical Informatics fellowship – like the ones at Mt. Sinai, BIDMC, and Arizona – to make this diversion of passing interest to a few.

Today is the final day of the 2015 testing window, so everyone taking the test this year has already sat for it or is suffering through it at this moment.  Of course, I’m not going to reveal any specific questions, or point you toward any special topics to review (hint, hint), but rather offer my general impressions of the test – as someone who has taken a lot of tests.

The day started out well, as the Pearson Vue registration clerk made a nice comment that I’d gone bald since my picture at my last encounter with them, presumably for USMLE Step 3.  After divesting myself of Twitter-enabled devices, the standard computer-based multiple-choice testing commenced.

First of all, for those who aren’t aware, this is only the second time the American Board of Preventive Medicine has administered the Clinical Informatics board examination.  Furthermore, there are few – probably zero – clinicians currently taking this examination who have completed an ACGME Clinical Informatics fellowship.  They simply don’t exist.  Thus, there is a bit of a perfect storm: none of us have undergone a specific training curriculum preparing us for this test, there is minimal hearsay or experience from folks who have previously taken it, and the test itself is essentially still experimental.

Also, the majority (>90%) of folks taking the test use one of AMIA’s review courses – either the in-person session or the online course and assessment.  These courses step through the core content areas described for the subspecialty of Clinical Informatics, and, in theory, review the necessary material to obtain a passing score.  After all, presumably, the folks at AMIA designed the subspecialty and wrote most of the questions – they ought to know how to prep for it, right?

Except, as you progress through the computer-based examination, you find the board review course has given you an apparently uselessly superficial overview of many topics.  Most of us taking the examination today, I assume, are current or former clinicians, with some sort of computer science background, working part-time as researchers in some subset of clinical informatics.  This sort of experience gets you about half the questions on the exam in the bag.  Then, another quarter of the exam – if you know every detail of what’s presented in the review course regarding certification organizations, standards terminologies, process change, and leadership – that’s another 50 of 200 questions you can safely answer.  But, you will need to have pointlessly memorized a pile of acronyms and their various relationships to get there.  Indeed, the use of acronyms is pervasive enough it’s almost as though the intention is less to assess your domain expertise than to weed out those who don’t know the secret handshake of Clinical Informatics.

The last quarter of the exam?  The ABPM study guide for the examination states 40% of the exam covers “Health Information Systems” and 20% covers “Leading and Managing Change”.  And, nearly every question I was reduced to guessing at came from those two areas – covering details either absent from the AMIA study course or addressed only in passing vagueness.  And, probably as some consequence of this being one of the first administrations of this test, I wasn’t particularly impressed by the questions – heavy on specific recall, and hardly at all on application of knowledge or problem solving.  I’m not sure exactly what resources I’d use to study before retaking it if I failed, but most of the difference would come down to rote memorization.

However, because the pass rate was 92% last year, and nearly everyone taking the test used the AMIA course, an average examinee with average preparation ought still to be in good shape.  So, presumably, despite my distasteful experience overall – one likely shared by many – we’ll all receive passing scores.

Check back mid-December for the exciting conclusion to this tale.

TMJ Dislocations: A Better Mousetrap?

Anterior temporomandibular dislocations are generally quite satisfying closed reductions.  Patients, understandably, are exceedingly grateful to have their function restored.  However, reduction typically requires parenteral analgesia, sometimes procedural sedation, and puts the practitioner at risk of injury from inadvertent biting.

This interesting pilot describes a technique in which the patient, essentially, self-reduces the TMJ dislocation by using a syringe held between the posterior molars as a rolling fulcrum.  In brief: a 5 or 10 mL syringe is placed between the posterior upper and lower molars on the affected side, and the patient gently bites down while rolling the syringe back and forth – the rolling motion glides the mandible posteriorly until it drops back into place.

These authors used this technique in 31 cases, and only one was ultimately unsuccessful – a 97% success rate.

While this is not the intended use for a syringe, I can hardly imagine any terribly harmful adverse effects from materials failure – and any such risks certainly don’t exceed those of procedural sedation.  I find it entirely reasonable to experiment with this technique.

“The ‘Syringe’ technique: a hands-free approach for the reduction of acute nontraumatic temporomandibular dislocation in the Emergency Department.”
http://www.ncbi.nlm.nih.gov/pubmed/25278137

Who Loves Tamiflu?

Those who are paid to love it, by a wide margin.

This brief evaluation, published in Annals of Internal Medicine, asks the question: is there a relationship between financial conflicts of interest and the conclusions of systematic reviews regarding the use of neuraminidase inhibitors for influenza?  To answer such a question, these authors reviewed 37 assessments in 26 systematic reviews, published between 2005 and 2014, and evaluated the concluding language of each as “favorable” or “unfavorable”.  They then checked each author of each systematic review for relevant conflicts of interest with GlaxoSmithKline and Roche Pharmaceuticals.

Among those systematic reviews associated with author COI, 7 of 8 assessments were rated as “favorable”.  Among the remaining 29 assessments made without author COI, only 5 were favorable.  Of the reviews published with COI, only 1 made mention of limitations due to publication bias or incomplete outcomes reporting, versus most of those published without COI.
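
For the statistically inclined, a quick sketch of just how stark that contrast is – my computation from the counts reported above, not an analysis from the paper, and it assumes scipy is available:

    # Association between author COI and "favorable" conclusions,
    # using the counts reported above.
    from scipy.stats import fisher_exact

    # Rows: author COI present / absent; columns: favorable / unfavorable
    table = [[7, 1],    # 7 of 8 assessments with COI rated favorable
             [5, 24]]   # 5 of 29 assessments without COI rated favorable

    odds_ratio, p_value = fisher_exact(table)
    print(f"Sample odds ratio: {odds_ratio:.1f}")    # (7 × 24) ÷ (1 × 5) = 33.6
    print(f"Fisher's exact p-value: {p_value:.4g}")

That works out to 87.5% favorable with COI versus 17.2% without – not a subtle signal.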

Shocking findings to all frequent readers, I’m sure.

“Financial Conflicts of Interest and Conclusions About Neuraminidase Inhibitors for Influenza”
http://www.ncbi.nlm.nih.gov/pubmed/25285542

Original link in error ... although, it's a good article, too!
http://www.ncbi.nlm.nih.gov/pubmed/24218071