In this post, Rob Cooney, the man, the myth, the legend, discusses FOAM and the adoption of FOAM into practice. Recently, the FOAM world was able to have a front row seat at a great debate between two titans as they went head-to-head over the issue of adopting a change in practice.
Part 1: SMACC Gold: What to Believe-When to Change
Part 2: SMACC Back-On Beliefs of Early Adopters and Straw Men
Part 3: SMACC Back-Back on What to Believe and When to Change
While I enjoyed the debate and found myself nodding in agreement with each of the speakers, in the end, I was left wanting a little more. There was very little practical advice given on how to adopt a change. In fairness to the speakers, this is a high-level construct, a 50,000-foot view of the issue, designed to inspire you and make you think. That being said, Scott Weingart did give a glimpse of the solution:
“You need to put in the time. You need to read. You need to understand how to critically appraise new evidence; how to integrate it into your existing belief structure; how to then test that based on bedside clinical experience; based on your understanding of physiology, based on the specifics of every individual patient.”
He is absolutely correct in his statement. My only concern is that his call for a rigorous method will scare away early adopters and push them back into the early majority. I’m pretty sure that wasn’t his intent, but just in case, I wanted to offer a more “boots on the ground” approach that I think many of us could use to determine when to implement a change in practice.
The methodology that I believe will allow everyday clinicians to adopt changes in their practice is called the Model for Improvement. This year, I have the good fortune of being an IHI/AIAMC fellow. This means that I am learning, living, and breathing quality improvement. The Institute for Healthcare Improvement is a remarkable organization with quite the track record for implementing positive changes in healthcare. The model they use? The Model for Improvement. So what does this model look like?
As you can see, the model is based on three critical questions followed by iterative cycles of testing and learning. Let’s break it down piece by piece.
“All improvement requires a change, but not all change is an improvement.”
As you can see from the above quote, if we want to get better at something we have to make a change. Unfortunately, we sometimes can change things for the worse. Choosing what to change can be difficult. This is why the fundamental questions are critical.
Question 1: What are we trying to accomplish?
The first question can be viewed as the “aim statement.” This question must be answered very specifically. With quality improvement work, we attempt to identify the system, a timeline, and goals.
For example, “Within the next 12 months, we will reduce the door to doctor time in the emergency department from an average of 45 minutes to an average of 20 minutes.”
Practitioners working on individual improvements can easily choose much more manageable chunks to work with. Consider the use of push-dose pressors. Perhaps you have had difficulty with post-intubation hypotension and you’re considering the addition of push-dose pressors. Your aim could simply be, “I want to reduce the incidence of post-intubation hypotension to less than 10%.” The key is to be as specific as possible. As they say at the IHI:
“Hope is not a plan, some is not a number, soon is not a time.”
Question 2: How will we know that a change is an improvement?
“In God we trust. All others bring data.”
This may seem obvious, but in order to determine if a change is an improvement, we have to measure something! In the above example of push-dose pressors, you would have to measure your rate of post-intubation hypotension. If the addition of push-dose pressors did little to decrease your rate of hypotension, you’d likely abandon the practice before fully implementing it. While this seems intuitive, the actual measurement process can be made more robust by considering three types of measures:
Outcome Measures: These measures look at the performance of the system under study and are derived from the aim, e.g., the rate of hypotension after intubation
Process Measures: These measures look at the rate of utilization of an activity, e.g., use of medications and fluids, or collection of before-and-after vital signs
Balancing Measures: Improving one part of a system at the expense of another should be mitigated as much as possible. Balancing measures attempt to look at the performance of the overall system. For example, how long does it take to intubate a patient with the addition of new drugs? Is there more hypoxia after the change?
It is also important to note that there are three kinds of measurement: research, judgment/accountability, or improvement. In terms of improvement data, we are not looking for rigorous, randomized, double-blind, placebo-controlled level data. We simply want “just enough” data to determine whether the change we are implementing is leading to an improvement.
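To make the idea of “just enough” improvement data concrete, here is a minimal sketch in Python. Everything in it is hypothetical, the records, field names, and numbers are illustrative, not real clinical data; it simply shows that a bare-bones log of intubations is all you need to compute the outcome measure before and after the change:

```python
# "Just enough" measurement sketch -- hypothetical data, not a real dataset.
# Each record: did post-intubation hypotension occur, and was a push-dose
# pressor used for that intubation?

intubations = [
    {"hypotension": True,  "push_dose_pressor": False},  # baseline period
    {"hypotension": True,  "push_dose_pressor": False},
    {"hypotension": False, "push_dose_pressor": False},
    {"hypotension": False, "push_dose_pressor": True},   # after the change
    {"hypotension": True,  "push_dose_pressor": True},
    {"hypotension": False, "push_dose_pressor": True},
    {"hypotension": False, "push_dose_pressor": True},
]

def hypotension_rate(records):
    """Outcome measure: fraction of intubations with post-intubation hypotension."""
    if not records:
        return 0.0
    return sum(r["hypotension"] for r in records) / len(records)

before = [r for r in intubations if not r["push_dose_pressor"]]
after = [r for r in intubations if r["push_dose_pressor"]]

print(f"Baseline rate: {hypotension_rate(before):.0%}")     # 67%
print(f"Post-change rate: {hypotension_rate(after):.0%}")   # 25%
```

A spreadsheet row per intubation would work just as well; the point is that a handful of counted events, not a randomized trial, is enough to tell you whether to keep going.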
Question 3: What changes can we make that will result in improvement?
This question is where improvement gets kind of fun. Depending on the complexity of the system that you are trying to improve, you may be able to come up with very simple ideas and test them easily. Answering this question also allows you to be quite creative in the solutions you suggest. Have you seen something work in another industry that you think may apply to your day-to-day practice? Try it out!
Once you’ve answered the three questions above, it’s time to test the actual changes. This is done through iterative cycles known as PDSA, or Plan-Do-Study-Act, cycles.
These are simple experiments that take the ideas that you’ve created above and actually test them.
Plan: What are you planning to do? How will you do it? Is anyone else going to test the change with you? How will you collect the data?
Do: Implement the test and collect the data
Study: What did you learn? Did the data match the predictions?
Act: What do you need to change before the next cycle? Did it work well enough that you can apply it more broadly?
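The four steps above can be sketched as a simple loop. This is only an illustrative sketch: `run_pdsa`, the assumption that each small test halves the rate, and the 10% aim are all hypothetical placeholders, not part of the IHI model itself:

```python
# Illustrative PDSA loop -- the halving assumption and the 10% aim are
# made-up placeholders, not real clinical data or IHI tooling.

def run_pdsa(initial_rate, goal=0.10, max_cycles=5):
    """Iterate Plan-Do-Study-Act cycles until the outcome measure meets the aim."""
    rate = initial_rate
    for cycle in range(1, max_cycles + 1):
        predicted = rate * 0.5                  # Plan: predict the test halves the rate
        rate = rate * 0.5                       # Do: run the small test, measure the new rate
        matched = abs(rate - predicted) < 0.01  # Study: did the data match the prediction?
        if not matched:
            break                               # Act: prediction failed -- revisit the plan
        if rate <= goal:
            return cycle, rate                  # Act: aim met -- scale the change up
    return None, rate                           # aim not met within max_cycles
```

For example, starting from a 40% hypotension rate, `run_pdsa(0.40)` reports the aim met after two cycles. The key design point mirrors the text: each pass through the loop is a small, cheap test, and only the final `return` (the aim being met) licenses broader implementation.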
Notice from the above figure that the cycles are designed to be iterative, meaning one cycle flows smoothly into the next as the data guides the improvements. Too often in healthcare, we identify a change and go straight into implementation. This is the dangerous practice that both Scott and Simon cautioned against. The below figure illustrates why it is important to implement changes very slowly. Every change comes with a cost. The higher the cost, the smaller the test should be. If there is any potential for harming a patient, VERY small tests are the first choice. This helps to mitigate the harm while allowing future cycles to scale up if the change seems feasible. It also allows the early adopters to get things right before pushing the change out into the workforce.
While this model may seem complex, with a little bit of trial and error, it is quite simple to apply. It is also universally scalable. Want to lose weight? Apply the model: What am I trying to accomplish? Lose weight. How will I know that a change is an improvement? My weight will drop, my clothing will fit easier, I’ll feel better, etc. What changes can I make? Eat less, exercise more, etc. These three critical questions, once answered, can then be tested, measured, and modified. Whether trying out simple new things or attempting to modify complex systems, use of this model allows a practical approach to making changes that drive improvement. It also allows “less expert” early adopters to safely dip their feet into the world of FOAM.