Next Up in Syncope Prediction

The Great White North is the land of clinical decision instruments.  Canadian Head, Canadian C-Spine, Ottawa Ankle, Ottawa SAH, the list goes on – and now, from the same esteemed group: the Canadian Syncope Risk Score.

The vast majority of patients with syncope have unrevealing initial and, if admitted, in-house evaluations.  That said, any physiologic interruption in the ability to perfuse the brain portends a prognosis worse than the general background risk.  These authors performed an observational study over the course of four years to prospectively derive a decision instrument to support risk-stratification for syncope.

There were 4,030 patients enrolled and eligible for analysis based on 30-day follow-up, and 147 of these suffered a “serious adverse event”.  They identified 43 candidate predictors for prospective collection, and ultimately this resulted in a multivariate logistic regression predictive model with 9 elements.  Scores range from -3, with a 0.4% estimated risk for SAE, to 11, with an 83.6% estimated risk for SAE.  Usable confidence intervals, however, were mostly limited to scores <5.
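For the curious, the two published extremes are consistent with the usual linear-in-score logistic mapping underlying these instruments.  A minimal Python sketch – back-calculating a slope and intercept from the reported endpoints, not the authors’ actual regression coefficients:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Back-calculate a linear score -> log-odds line from the two published
# endpoints: score -3 -> 0.4% SAE risk, score 11 -> 83.6% SAE risk.
slope = (logit(0.836) - logit(0.004)) / (11 - (-3))
intercept = logit(0.004) - slope * (-3)

def estimated_sae_risk(score):
    """Illustrative score-to-risk mapping, NOT the authors' fitted model."""
    return sigmoid(intercept + slope * score)

print(round(estimated_sae_risk(0) * 100, 1))  # a mid-range score of 0 -> roughly 1.8%
```

This reproduces the reported extremes by construction; intermediate values are illustrative only.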

There are a few things I would quibble with regarding this study.  The “serious adverse event” definition is rather broad, and includes 30-day events for which the underlying pathology was not present or necessarily preventable at the initial visit.  For example, a patient with a subsequent encounter for a GI bleed or a case of appendicitis fit their criteria for SAE.  This would diminish the instrument’s apparent sensitivity without materially improving its clinical relevance.  Then, there is the oddity of incorporating the final ED diagnosis into the scoring system – where a provisional diagnosis of “vasovagal syncope” is -2, and a diagnosis of “cardiac syncope” is +2.  The authors explicitly defend its inclusion and the methods behind it – but I feel its subjectivity, coupled with widespread practice variation, will impair this rule’s generalizability and external validation.

Finally, the last issue with these sorts of “rules”: “high risk” is frequently conflated with “admit to hospital”.  In many situations close to the end of life, the protective effect of hospitalization and medical intervention vanishes – the admission may have little or no value.  This sort of stratification should be applied within the appropriate medical and social context, rather than simply triggering admission.

“Development of the Canadian Syncope Risk Score to predict serious adverse events after emergency department assessment of syncope”
http://www.ncbi.nlm.nih.gov/pubmed/27378464

Perpetuating the Flawed Approach to Chest Pain

Everyone has their favored chest pain accelerated diagnostic risk-stratification algorithm or pathway these days.  TIMI, HEART, ADAPT, MACS, Vancouver, EDACS – the list goes on and on.  What has become painfully clear from this latest article, however, is that this approach is fundamentally flawed.

This is a prospective effectiveness trial comparing ADAPT to EDACS in the New Zealand population.  Each “chest pain rule-out” was randomized to either the ADAPT pathway – using modified TIMI, ECG, and 0- and 2-hour troponins – or the EDACS pathway – which is its own unique scoring system, ECG, and 0- and 2-hour troponins.  The ADAPT pathway classified 30.8% of these patients as “low risk”, while the EDACS classified 41.6% as such.  Despite this, their primary outcome – patients discharged from the ED within 6 hours – non-significantly favored the ADAPT group, 34.4% vs 32.3%.

To me, this represents a few things.

We still have an irrational, cultural fear of chest pain.  Only 11.6% of their total cohort had STEMI or NSTEMI, and another 5.7% received a diagnosis of “unstable angina”.  Thus, potentially greater than 50% of patients were still hospitalized unnecessarily.  Furthermore, this cultural fear of chest pain was strong enough to prevent acceptance of the more-aggressive EDACS decision instrument being tested in this study.  A full 15% of low-risk patients by the EDACS instrument failed to be discharged within 6 hours, despite their evaluation being complete following 2-hour troponin testing.

But, even these observations are a digression from the core hypothesis: ADPs are a flawed approach.  Poor outcomes are such a rarity, and so difficult to predict, that our thought process ought to be predicated on a foundation that most patients will do well, regardless, and only the highest-risk should stay in the hospital.  Our decision-making should probably be broken down into three steps:
  • Does this patient have STEMI/NSTEMI/true UA?  This is the domain of inquiry into high-sensitivity troponin assays.
  • Does the patient need any provocative testing at all?  I.e., the “No Objective Testing Rule”.
  • Finally, are there “red flag” clinical features that preclude outpatient provocative testing?  The handful of patients with concerning EKG changes, crescendo symptoms, or other high-risk factors fall into this category.
If we are doing chest pain close to correctly, the numbers from this article would be flipped – rather than ~30% being discharged, we ought to be ~70%.

“Effectiveness of EDACS Versus ADAPT Accelerated Diagnostic Pathways for Chest Pain: A Pragmatic Randomized Controlled Trial Embedded Within Practice”

The “IV Antibiotics” Sham

Among the many overused tropes in medicine is the myth of the supremacy of intravenous antibiotics.  In the appropriate clinical context, it’s just a waste.

This is a retrospective analysis of 36,405 patients hospitalized for community-acquired pneumonia, and for whom a fluoroquinolone was selected as therapy.  The vast majority – 94% – received an intravenous dose, while the remaining 2,205 (6%) were treated orally.  Unadjusted mortality favored the oral dose – unsurprisingly, as those patients also generally had fewer comorbid conditions.  In their multivariate, propensity-matched analysis, there was no difference in mortality, intensive care unit escalation, or mechanical ventilation.
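For readers unfamiliar with the technique, propensity matching simply compares each patient to a counterpart with a similar modeled probability of receiving the treatment, so the comparison is apples-to-apples.  A toy sketch of greedy 1:1 nearest-neighbor matching – invented numbers, not the study’s data or its actual matching algorithm:

```python
def nearest_neighbor_match(ps_treated, ps_control):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.
    Returns, for each treated unit, the index of its matched control."""
    available = list(range(len(ps_control)))
    matches = []
    for p in ps_treated:
        j = min(available, key=lambda i: abs(ps_control[i] - p))
        matches.append(j)
        available.remove(j)  # match without replacement
    return matches

# Toy numbers (hypothetical, not study data): modeled probability of
# receiving oral (rather than IV) therapy, plus mortality for each patient.
ps_oral   = [0.30, 0.35, 0.40]              # the smaller, "treated" group
ps_iv     = [0.10, 0.33, 0.41, 0.80, 0.90]  # the control pool
died_oral = [0, 0, 1]
died_iv   = [0, 0, 1, 1, 1]

idx = nearest_neighbor_match(ps_oral, ps_iv)
crude_iv_mortality   = sum(died_iv) / len(died_iv)            # 0.60, unadjusted
matched_iv_mortality = sum(died_iv[i] for i in idx) / len(idx)
oral_mortality       = sum(died_oral) / len(died_oral)
# After matching on comparable propensity, mortality is identical here,
# echoing the study's null result after adjustment.
```

The sicker, high-propensity IV patients never find an oral counterpart, so they drop out of the comparison – which is exactly why the crude and matched estimates diverge.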

These results are wholly unsurprising, and the key feature is the class of antibiotic involved.  Commonly used antibiotics – the fluoroquinolones, trimethoprim-sulfamethoxazole, metronidazole, and clindamycin, among others – have excellent oral absorption.  I have seen many a referral to the Emergency Department for “intravenous antibiotics” prior to an anticipated discharge to home therapy when any one of these choices could have obviated the entire encounter.

“Association Between Initial Route of Fluoroquinolone Administration and Outcomes in Patients Hospitalized for Community-acquired Pneumonia”
http://www.ncbi.nlm.nih.gov/pubmed/27048748

The Magic Bacterial Divining Rod

Antibiotic overuse is a real issue.  In resource-rich countries, despite obsessing over antibiotic stewardship, we are still suckers for the excessive use of both narrow-spectrum antibiotics for ambulatory patients and broad-spectrum antibiotics for the critically ill.  In less resource-capable areas, the tests used to stratify patients as potentially bacterial or viral exceed the cost of the antibiotics themselves – also leading down the path to overuse.

This breathless coverage, featured in Time, the AFP, and proudly advertised by Stanford Medicine, profiles a new panel of tests that is destined to bring clarity.  Rather than relying simply on a single biomarker, “our test can detect an infection anywhere in the body by ‘reading the immune system’”.

They used retrospective genetic expression cohorts from children and adults with supposedly confirmed non-infectious or infectious etiologies to derive and validate a scoring system to differentiate the underlying cause of sepsis.  They then further trimmed their model by eliminating infants and predominantly healthy patients from outpatient cohorts.  Ultimately, they tested their model on a previously uncharacterized whole blood sample from 96 pediatric sepsis patients and report an AUC for viral vs. bacterial sepsis of 0.84, with a -LR of 0.15 and +LR of 3.0 for bacterial infections.  At face value, translated to a presumed clinical setting with a generally low prevalence of bacterial infection complicating SIRS, this is an uninspiring result.
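The translation from likelihood ratios to bedside probabilities is just Bayes in odds form.  A quick sketch, assuming – purely for illustration – a 20% pre-test probability of bacterial infection among SIRS presentations:

```python
def post_test_prob(pretest, lr):
    """Convert a pre-test probability to a post-test probability
    via a likelihood ratio, working in odds form."""
    odds = pretest / (1 - pretest)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical 20% pre-test probability, using the reported test-set LRs:
print(post_test_prob(0.20, 3.0))   # "bacterial" result -> ~0.43
print(post_test_prob(0.20, 0.15))  # "viral" result -> ~0.036
```

At that prevalence, a “bacterial” read moves you only to roughly 43%, and a “viral” read to roughly 3.6% – hence the uninspiring verdict.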

However, these authors rather focus their discussion and press releases around the -LR of 0.10 and +LR of 2.34 produced as part of their ideal validation cohort, trumpeting its superiority over the -LR for procalcitonin of 0.29 as a “three-fold improvement”.  This is, of course, nonsense, as the AUC from that same procalcitonin meta-analysis was 0.85, and these authors are simply cherry-picking one threshold and performance characteristic for their comparison.

Now, that’s hardly to say this is not novel work, and their confusion matrices showing clustering of non-infected SIRS vs. bacterial sepsis vs. viral sepsis are quite lovely.  Their approach is interesting, and very well could ultimately outperform existing strategies.  However, their current performance clearly does not match the hype, and they are miles away from a meaningful validation.  Furthermore, the sort of nano-array assay required is neither fast enough to be clinically useful nor likely to be produced cheaply enough to be used in some of the resource-poor settings they claim to be addressing.

It makes for a nice headline, but it’s better consigned to the “Fantasy/Science Fiction” shelf of your local bookstore for now.

“Robust classification of bacterial and viral infections via integrated host gene expression diagnostics”
http://stm.sciencemag.org/content/8/346/346ra91

Pan-Scans Don’t Save Lives

Humans are fallible.  We don’t always make good choices, and our patients – bless their hearts – can sometimes be time bombs wrapped in meat.  Logically, then, as many trauma services have concluded, the solution is to eliminate the weak link: don’t let the human choose which parts of the body to scan – just scan it all.

This is REACT-2, a randomised [sic] trial evaluating precisely the limits of human judgment in a resource-utilization versus immediacy context.  In this multi-center trial, adult trauma patients with suspected serious injury were randomized to either imaging guided by clinical evaluation or total-body CT.  The primary outcome was in-hospital mortality, with secondary outcomes relating to timeliness of diagnosis, mortality in other time frames, morbidity, and costs.

This was a massive undertaking, with 1,403 patients randomly assigned, and ~540 in each arm successfully allocated and included in their primary analysis.  Each cohort was well-matched on baseline characteristics, including all physiologic markers, although the Triage Revised Trauma Score was slightly lower (worse) for the total-body CT group.  The results, in most concise form, weakly favor selective scanning.  There was no difference in mortality, complications, length-of-stay, or virtually any other reliable secondary outcome.  Costs, as measured in European terms, were no different, despite the few scans obviated.  Time-to-diagnosis was slightly faster in the total-body CT group, owing to skipping initial conventional radiography, while radiation exposure was slightly lower in the selective scanning group.

In some respects, it is not surprising there were no differences found – as CT was still frequently utilized in the selective CT cohort, including nearly half who ultimately underwent total-body CT.  There were some differences noted in in-hospital Injury Severity Score between groups, and I agree with Rory Spiegel’s assertion that this is probably an artifact of the routine total-body CT.  This study can be used to justify either strategy, however – with selective CT proponents focusing on the lack of differences in patient-oriented outcomes, and total-body CT proponents noting minimal resource and radiation savings at the expense of timeliness.

“Immediate total-body CT scanning versus conventional imaging and selective CT scanning in patients with severe trauma (REACT-2): a randomised controlled trial”
http://www.ncbi.nlm.nih.gov/pubmed/27371185