Best Practices in Mortality Review
Video Transcription
Please welcome assistant professor at Atrium Health Wake Forest Baptist, Dr. Olivia Gilbert. Welcome back, everyone. Hope you had a good lunch and enjoyed your first morning with us. We will be here for our general session on best practices in mortality review. We'll have two 20-minute talks to frame our thoughts and discussions, and then we'll open up to Q&A for about 20 minutes. I absolutely loved all of the thought-provoking questions you provided during the first session; as you think of more, please submit them through the portal, and we'll address them during the Q&A. I want to introduce our speakers and our panelists. Starting on our far side, we have Amber Clampett, who works as a data abstractor for the Chest Pain MI Registry at Indiana University and is on our planning committee for the conference. Cindy Gillian, who's assistant vice president of service line quality for HCA's Continental Division and a cardiac nurse practitioner. Sitting next to her in the middle, we have Dr. Jason Wasfy, who's an associate professor at Harvard Medical School and director of quality and outcomes at the Mass General cardiology division. And then we have Dr. Steven Bradley, who is a general cardiologist and director of quality and associate director for the Healthcare Delivery Innovation Center at Allina Health and the Minneapolis Heart Institute. He also serves as the chair of the NCDR Oversight Committee and is a senior medical advisor for measurement at the ACC. So we'll jump into our talks, starting with Dr. Jason Wasfy. We'll then have a talk from Dr. Bradley, and then we'll transition into questions for our panel. All right. We'll get started. Thank you, Dr. Gilbert. I'm under the impression that my mic is on, and I think it is. So very good to see you all. My name is Jason Wasfy. I'm an associate professor at Harvard Medical School. I'm a cardiologist at the Mass General Hospital. I've had a variety of roles over the years that have to do with governance of quality, and I'm also currently the chair of the metrics and reporting subcommittee of the NCDR. It's really nice to see so many people here. I do not know how to advance my slides, but I bet it's this. Let's see. This is a cartoon drawn by a surgeon named Ernest Amory Codman in 1915, someone I think about a lot. He was in his mid-40s, an associate professor at Mass General and Harvard Medical School, at about the same stage of career as me, graying a little bit like me. And he had very strong views back then about the reporting of quality. This was over 100 years ago, at a time when no one was measuring quality and outcomes, and he held that every physician and every hospital should measure mortality outcomes, report mortality outcomes, and constantly think about how to reduce mortality for our patients. You can see on this slide he's asking, if people knew the truth, would they still come to our hospital? There's an ostrich with its head in the sand, the idea being that doctors and hospitals at the time didn't look at outcomes, didn't assess quality. It was a much different time in medicine, and it actually was a disaster for his career. He got kicked out of the Suffolk District Surgical Society, was ostracized, people stopped referring him patients, he got kicked out of the hospital, and he actually died relatively poor. This is his grave in Harvard Square.
But obviously these ideas he had in 1915 have since become core aspects of how we think about delivering health care, and his reputation has really surged since he died in the 1940s. For many years at my hospital, the Center for Quality and Safety was actually named after him. So he really has had a restoration of his ideas and his reputation. These ideas are now core to what all of you do in your profession: strictly in a Donabedian framework, thinking about the structure of health care, where we deliver services, and the processes that health care delivery entails, and aligning those with trial-derived evidence of best practices, should improve outcomes. We could, in the delivery of health care, even if there were never to be another new drug or another new procedure, improve mortality for our patients simply by taking what we know and what we have now and doing it better. That idea underlies any kind of quality improvement aimed at reducing mortality. But here is the problem with this idea, which is so obvious and so compelling and so important; who would argue with it now (maybe they did in 1915) that we should measure outcomes, report them, constantly think about them, and disclose them? It has tremendously strong, tremendously compelling face validity. But in the early days of public reporting of inpatient and procedural mortality, it was actually not clear that it was very helpful. And depending on how those data are measured and used, public reporting has apparently been associated not only with failing to make things better, but with actually causing harm. And I don't mean to say, I'm not going back to 1915, I don't mean to say that we shouldn't do it. We obviously should do it. A lot of the roles that these panelists have, and that you all have, are built around it. But I think the lessons of that history are very important in terms of how we use mortality data to reduce mortality for patients. The problem is you obviously can't manage what you don't measure. It's impossible to imagine designing or maintaining clinical programs around data that you can't see. And the public has a right to know. There are some people who say we should stop doing this, or at least stop disclosing it, because it's caused harm. I think that's not going to happen. That's like saying that doctors and APPs want to stop doing prior authorizations. There's not a world in which the public will say it doesn't care about the outcomes. Any call for transparency calls for the disclosure of these outcomes; it's never going to go away. In the early days of public reporting for PCI mortality where I work in Massachusetts, the initial data showed that mortality after PCI is pretty rare. It's 1.68%. And the overwhelming majority of patients who die after PCI come in with cardiogenic shock or other very high-risk features. So you're not supposed to come in and die; it's very, very rare to die after a stable PCI procedure. And part of the problem in the early days was that a lot of variables that were strongly associated with high mortality weren't in the risk calculator. The risk adjustment methods didn't incorporate some of these other variables. So what happens in that setting, when you don't have adequate risk adjustment, the event is rare, one or two cases can really throw you off, and it's publicly reported? What happened is this.
So the lower line is Massachusetts. The higher line is control states that did not have public reporting for PCI mortality. What happened when public reporting began is that doctors stopped doing PCI for high-risk patients with myocardial infarction. That's not what anyone wanted here. We wanted to measure to improve performance, not to create risk aversion or the avoidance of high-risk patients, but that is what happened. You can see quite clearly that in Massachusetts, when this happened, people did PCI for AMI a lot less. And what happened in that setting, when doctors are doing less PCI for acute MI, something we know in many cases is indicated with a class I recommendation, is that mortality went up. That's not what you expect, right? That's not what we want. But with the disclosure of this information with inadequate risk adjustment, the mortality of AMI, the true denominator, not PCI, went up, presumably because patients were being turned down for the procedure. And this has happened in other settings. This is the early days of public reporting in New York State. The black bars here are hospitals in New York State, and the white bars are hospitals in Michigan, two totally different reporting environments. This is the proportion of individuals coming in with cardiogenic shock, again, a very high-risk feature for PCI mortality, who received PCI. The black bars are New York. So there's clear evidence of avoidance: doctors are not doing the procedures in high-risk patients when you have public reporting with inadequate risk adjustment. A lot has been written about this since then. It's happened in different settings. And we certainly acknowledge that we're not going to stop; we're not going to not measure and not report. But there are probably better ways to do it. Disease-based reporting as opposed to procedure-based reporting, so you're capturing the true denominator; if you're reporting on diseases or episodes, you don't have the problem of avoiding procedures for higher-risk patients. Thinking about what we're measuring, so process measures as opposed to outcome measures, and thinking about patient-centric reporting, things that are really important to patients. Thinking about non-public reporting: it may very well be that we should measure mortality and talk about it internally, saying, my mortality is 7%, yours is 4%, what's going on here? The thing that seems to be really associated with potential harm is full public disclosure of these metrics. I do think some of these approaches can help us, in an aspirational way, think about how we look at mortality for individuals in the hospital or as outpatients, focusing more on syndromes rather than procedures, and focusing more on processes. There are many cases where mortality events are truly inevitable, and other cases where the patient survives but mistakes were made, and that can be something that can be improved upon. I don't think we're ever going to see any reduction of interest in this type of information. So we have to think about this legacy and how to use this information in a way that can improve the quality of care and outcomes for our patients. I'll also say, and this is entirely self-interested because I'm the head of the metrics committee at the NCDR, that improving the science of quality measurement is a big pathway here.
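To make the risk-adjustment arithmetic above concrete, here is a minimal sketch, in Python, of an observed-to-expected (O/E) mortality calculation. This is not the NCDR's actual model; the predicted risks, volumes, and the mis-scored shock patient below are invented to show why a rare event plus inadequate risk adjustment lets one or two cases throw a program off.

```python
# Hedged illustration only: observed-to-expected (O/E) risk-adjusted mortality.
# In practice, predicted risks come from a validated risk model (for example,
# a logistic regression over registry variables); the numbers here are invented.

def oe_ratio(predicted_risks, observed_deaths):
    """O/E ratio: observed deaths divided by the sum of model-predicted risks."""
    expected_deaths = sum(predicted_risks)
    return observed_deaths / expected_deaths

# 200 mostly elective PCIs, each at ~1% predicted risk: about 2 expected deaths.
risks = [0.01] * 200
print(oe_ratio(risks, observed_deaths=2))  # 1.00 -> performing as expected
print(oe_ratio(risks, observed_deaths=4))  # 2.00 -> two extra deaths double it

# If the model omits a high-risk variable such as cardiogenic shock, a shock
# patient who truly carries ~30% risk may be scored at 1%. Expected deaths are
# then understated, so the O/E ratio is inflated even with identical care;
# that is one mechanism behind the risk aversion described above.
risks_missing_shock = [0.01] * 200          # shock patient mis-scored at 1%
risks_with_shock = [0.01] * 199 + [0.30]    # same patient scored correctly
print(oe_ratio(risks_missing_shock, 3))     # 1.50 -> looks like an outlier
print(oe_ratio(risks_with_shock, 3))        # ~1.31 -> closer to expected
```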
Making the metrics better, so that they measure quality better, will create more confidence among operators that they're measuring actual quality. The metrics will never perfectly capture all risk, but if you can get closer to that, doctors will have more confidence in them, and then there's less risk aversion. You don't need to improve your numbers through risk aversion if the metric adequately encapsulates all the variables that would be relevant to risk-adjusting it. And that's what we seek to do when we develop and evaluate new metrics at the MRM: we really think about how to make these measures better, how to make them real, how to make sure that in the data collection forms we're capturing data elements that are pertinent and relevant not only to outcomes, but also to the reasons why caregivers would select one procedure or another. So thank you. I'm delighted to be here. It's also wonderful to see so many people who are passionate about improving the quality of health care come to one meeting. I look forward to the questions and the discussion. Thank you.

And Jason disclosed to me before he started speaking that his public speaking probably began when he was captain of his wrestling team. I learned two things: I'll never grapple with Jason, and I wasn't captain of a wrestling team, so that explains why he's a better speaker than I am. But I'm really happy to be here with you today. It's been wonderful to see old friends, make new friends, and be part of a community that speaks to what we're all passionate about, improving the quality of care for our patients. Jason spoke foundationally about what mortality measures mean and how they contribute at kind of the 30,000-foot level. My objective today is to speak to you about how we take that down to the granular level to really understand what's happening to the individual patient. How do we learn from that? And how do we move forward? It's interesting: I've done this work day in and day out for a period of time, but this is the first time I've given a talk on this subject, so I'm excited to have the opportunity to speak with you about it. As you've heard, I serve as the chair of NCDR oversight, and I'm the senior medical advisor for measurement for the ACC. So my objectives are to explain the rationale for mortality review, what we're trying to achieve by doing it, and to talk about standardized approaches to mortality review. We really need a standardized structure so that we're getting that granular data, that detailed information, to get to the root cause of what happened and how we're going to move forward. You'll hear me refer to this as mortality review because we've taken it in the context of mortality measures, but this concept can be applied to any outcome of interest: cath PCI bleeding rates, contrast-induced acute kidney injury, TAVR pacemaker rates, STS surgical complications, anything. If there's an adverse event, you can apply the same concept of detailed review in a structured format to get to root causes. You can also do that for near misses, and that's an important concept as well: defining how we're going to identify those episodes of care where something bad didn't happen but we came pretty close, and how we're going to find them so we can identify ways to improve.
And then finally, how do we systematically put those pieces together to identify themes that help us drive process change to improve the care of our patients? So first, what is the rationale for mortality review? As I've gone through training and through years of being a physician, the same truism has remained since my first day as a medical student: I learn more from my mistakes than I do from the things I've done right. But that's because of taking the time to reflect on what happened in that mistake. What was the problem? Was it a lack of knowledge, a lack of training, or was there some other systemic process that led to that mistake? In reality, there are no mistakes save one: the failure to learn from the mistake. So we need to use the opportunity to learn from our bad events, our mistakes, to get better for the sake of our patients. Mortality review is not blame-seeking. It's not looking for a scapegoat. We're not trying to find a way to point fingers at individuals; it's a way for us to collectively and systematically look at what happened to the patient through their process of care to identify the ways in which we can improve. And it's very, very important to create a culture in which people understand this is not about seeking to place blame. This is about us working together in collaboration to elevate the work that we all do for the betterment of our patients. Formally defined, medical mortality review is medical and other disciplinary experts reviewing the circumstances of an individual death to explore the root causes and identify interventions to prevent future deaths. I think there are a couple of key parts here. Other disciplinary experts: this is not one physician and one quality champion sitting in a room together. This is the whole team. It's nursing, APPs, ancillary services, respiratory therapy, laboratory, whatever it might be that contributes to the processes of care for a patient, to understand how these things tie together, how they contributed to the adverse event, and how we can do better. And I said this before, but I want to reiterate: this process of mortality review can be applied to those other types of events and near-miss events so that we can learn the most in moving forward in the care of our patients. We talk about hospital standardized mortality rates. There are primary drivers: avoiding needless harm, ensuring that we're applying evidence-based care, and applying care in the appropriate setting. And there are secondary drivers that feed into those primary drivers. For needless harm: how do we eliminate never events? How do we eliminate preventable deterioration, what we call in the surgical space failure to rescue? And how do we eliminate preventable complications? Were there aspects of the pre-procedural setting where we didn't do an adequate job of identifying the patient's risk and what could be mitigated? Or was it something afterwards, where we didn't act appropriately to avoid the deterioration that resulted from a complication? For evidence-based care: are we delivering core measures and following clinical guidelines, in an appropriate care setting? This will become increasingly important as we think about ambulatory surgical centers. These are going to grow, care delivery in ambulatory surgical centers will continue to grow, and there will be those who push the boundaries of what can be done in those care settings.
It's incredibly important that we take the time to review what happens in those settings as well, to ask: did we identify that this was the right setting for this patient? Do we have the right processes in place to ensure that the patient, based on their risk and estimated outcomes, is getting care in the appropriate setting, not just the one chosen for convenience and the ability to grow programs? When bad things happen, there's the Swiss cheese model. I'm sure many of you have heard of this concept: the idea that it's not just one event, but a series of events that happen in sequence and get through defenses that were designed to prevent the event, because a hole in each defense allowed it to go forward. So we can think about each slice of the cheese as a defense. We have a monitoring system. We have processes. We have education. We have training. We have staffing. But there are weaknesses in each one of those. Staffing: we are all less staffed on the weekends. There's clearly a weakness in that defense. Monitors and alarms: there are times when I have to take a break from our CCU and tele floors because the alarms give me a persistent headache. So how do those potential weaknesses in the defenses allow an active failure to move forward into a serious event? In the process of looking at that event, looking at those defenses and seeing where the failures occurred, the intent is to identify ways to close those holes to prevent these things from happening again in the future. So let's talk a little bit about a standardized approach to mortality review, and it's really about the root cause. We're trying to do an analysis to get to the underlying cause. It's referred to in the singular, but in reality we're talking about root causes; there are oftentimes multiple causes that led to the event, particularly in that Swiss cheese model. It's a formalized, in-depth process for investigating an incident with the goal of identifying the factors contributing to poor performance, with three fundamental concepts: we look for causal and contributory factors, and we want to outline all of them in as much granularity as we possibly can to truly understand how they play together; we then do a causal analysis; and we prioritize corrective actions that lead to the development of preventive strategies and effective countermeasures, closing the holes in the Swiss cheese. There are five rules of causation. First, we need to show clearly the cause and effect of each contributing factor, and one commonly used way to do that is with a so-called fishbone diagram, where we can label each cause under the categories of equipment, processes, people, materials, environment, and management, to say what the component was and how it led to the effect. Where did it fall in the Swiss cheese? Second, it's important to use specific and accurate descriptors. It's easy to fall into traps like "bad preoperative assessment." What was bad about it? Was it that we didn't understand that their creatinine was elevated? Was it that we didn't understand their comorbid factors? Was it that we did not understand this wasn't consistent with their goals of care? So be very specific about the shortcoming in the phase of the process that led to the adverse event. Third, human errors must have preceding causes. Say a provider failed to recognize STEMI on an EKG. Well, that'd be a pretty big failure, but what was behind it?
Why did that failure occur? Was it that the EKG was performed poorly and had a lot of background noise? Was it that the provider wasn't adequately trained, a training issue? Was it a staffing and fatigue issue, the provider being bombarded with a million other things at the same time they were responsible for this critical aspect of this patient's care? So what was the preceding cause for that human error? Similarly, fourth, violations of procedure are not root causes, but must have a preceding cause. If you have a procedure in place that wasn't followed, what was the reason? What led to the decision not to follow it? Not just that it happened, but what was the rationale, or what were the causes, that led to that failure of procedure? And then finally, fifth, a failure to act is only causal when there's an existing duty to act. There's a challenge sometimes in root cause analysis that hindsight is 20/20. I'll use an example of a patient who had an unfortunate event related to a delayed diagnosis of an aortic dissection. Very atypical presentation, with seizure. No reason to suspect aortic dissection. As part of the workup, because of aspiration, they had a CAT scan that showed a dilated aorta. Again, no suspicion for aortic dissection. But in review, there were a lot of people who wanted to say, well, why didn't we look for that dissection once we got that initial CT scan back? At that point, there was no pre-existing duty to act; it was not part of the clinical rationale. Now, the next day, when they developed heart failure and we delayed getting that patient an echocardiogram because we didn't have adequate staffing in place, that is part of the root cause. It's important to differentiate those components. So, getting to the root. At each step, it's important to ask what happened, what usually happens, and what should happen. One way to do this is to use what's called the five whys. There's a famous story about the Washington Monument, which turns out to be a myth, but it can help illustrate the method. The story goes that the Washington Monument was deteriorating, so they tried to understand why. The first why: why is it deteriorating? Well, because we're cleaning it frequently with harsh materials that break it down. Well, why are we cleaning it so frequently? Well, the birds really like to hang out here and the bird droppings make it messy, so we need to clean that up. Why do the birds like to hang out at the Washington Monument? Well, there are a lot of insects in the area and the birds like to feed on the insects, so it's a good place for the birds. Well, why are there so many insects at the Washington Monument? Well, we light it up at night to look pretty, and the insects are drawn to the lights. So getting to that led to a decision about how to light the Washington Monument, decrease the insects, decrease the birds, and help preserve the stone. It's a myth, but it helps you understand the concept of the five whys. We can apply the five whys to human factors (communication, training, fatigue, and scheduling), to environment and equipment, and to rules, policies, procedures, and barriers, really getting granular on what the reason was, using those five whys to work through to the root cause.
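As a hedged sketch of how that structured capture might look in practice (the schema, field names, and example cases below are invented for illustration, not a standard referenced in the talk), each contributing factor can be recorded with its fishbone category, its five-whys chain, and coded theme tags, so that individual reviews can later be rolled up into common-cause patterns:

```python
# Illustrative only: one structured record per contributing factor, pairing a
# fishbone category with its five-whys chain, plus a roll-up across reviews.
from collections import Counter
from dataclasses import dataclass, field

FISHBONE_CATEGORIES = {"equipment", "processes", "people",
                       "materials", "environment", "management"}

@dataclass
class ContributingFactor:
    category: str                               # a fishbone category above
    description: str                            # specific, never "bad care"
    whys: list = field(default_factory=list)    # the five-whys chain
    themes: list = field(default_factory=list)  # coded tags for roll-up

    def __post_init__(self):
        if self.category not in FISHBONE_CATEGORIES:
            raise ValueError(f"unknown fishbone category: {self.category}")

@dataclass
class MortalityReview:
    case_id: str
    factors: list

# Two invented cases, echoing the missed-STEMI example from the talk.
reviews = [
    MortalityReview("case-001", [
        ContributingFactor("people", "STEMI not recognized on EKG",
            whys=["provider fatigued", "covering two units overnight",
                  "weekend staffing model leaves one provider on nights"],
            themes=["staffing", "fatigue"]),
    ]),
    MortalityReview("case-002", [
        ContributingFactor("processes", "echo delayed for new heart failure",
            whys=["no sonographer available",
                  "weekend staffing model has no echo coverage"],
            themes=["staffing"]),
    ]),
]

# Roll individual root causes up into common-cause themes across cases.
theme_counts = Counter(t for r in reviews for f in r.factors for t in f.themes)
print(theme_counts.most_common())  # [('staffing', 2), ('fatigue', 1)]
```

The point of the roll-up is the "common cause" idea discussed later in this session: no single review shows the pattern, but counting coded themes across many reviews does.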
Another way to break this down is what's called a phase of care mortality analysis, first developed by Fred Grover, a surgeon in Denver and a true inspiration for many of us. This is the concept of breaking down the patient's care pathway into temporal relationships around the surgical procedure. We can think about preoperative, intraoperative, ICU, floor, and discharge components of the care, and within each phase there's a characteristic set of therapeutic goals, care pathways, and recovery expectations that we can review and analyze. It divides the process of care into interdependent compartments that contain multiple layers, and within each layer, in each compartment, we can again ask, do we have the right people, equipment, training, and education, and use those five whys to get into the details. Take the example of preoperative assessment. We can say, well, this patient wasn't appropriately assessed preoperatively. Why? Well, because we only have preoperative clinics on Monday and they needed the surgery on Friday. Why do we only have it on Monday? Because we only have one practitioner who does this. So we can get to an understanding of the systemic issues, whether scheduling, structure, or processes, to help us move forward, and then we can parse the clinical course into time segments that help us focus on specific elements for quality improvement. When we do this work, it's really important to be aware of the second victim. There is an emotional impact on providers, and I can speak to that personally; I've experienced it myself. When something happens and you've been a central component of that patient suffering an adverse event, there's a kick, there's a shock as to what happened to you as an individual, and then there's a fall, an emotional spiral of feeling responsible for that patient's adverse outcome. It's very important that we do this work in a collaborative environment where the intent is for us all to learn from the event that happened, recognizing that the work we're doing is to avoid it happening again and to provide the best care for our patients. The recovery and long-term impact really depend on having that collaborative environment where people can feel vulnerable, expose things that happened that they're not happy about and not proud of, and then learn and move forward to create the systems that help us do better. So how do we take all of this and move toward implementing quality improvement? How are we supposed to do all of this? In the prior talk, you heard a lot about how the evolution of NCDR dashboards is going to continue to provide you insights into how you're doing on specific measures, and that's a great place to start. At the 30,000-foot level: how are we doing relative to benchmarks? Is there an opportunity for us to do better? In cath PCI, for us, it was bleeding. In TAVR, it was pacemakers. And in that process, we did focused reviews of patient events to understand the processes and how to move forward to address them. I'll say bleeding continues to be an issue for us; pacemakers we improved. But that was one strategy to help us think about where to focus our opportunities and our efforts. We can also think about this relative to expected outcomes. In our surgical work, and also in cath PCI, we have the anticipated mortality or the expected mortality of a patient. If you bucket patients into low, intermediate, and high risk, and a low-risk patient dies, that's a signal. Something didn't go well in that situation. That's very, very unexpected, and there's an opportunity to review what happened in that specific case as a real flag that something needs attention. The other opportunity that comes from that is to review, did we actually categorize the risk of this patient appropriately? Did we fully capture all of the comorbidities that go into risk determination, which are important not only for understanding how that patient is going to do, but for how we report this patient to NCDR or STS or public reporting?
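Here is a minimal sketch of that low/intermediate/high-risk screen. The thresholds and cases are invented for the example; a real program would take predicted mortality from a validated registry model such as an NCDR or STS risk calculator, not from these numbers.

```python
# Illustration only: flag deaths among low-expected-risk patients for
# focused review. Thresholds below are invented; predicted mortality would
# come from a validated risk model.

def risk_bucket(predicted_mortality):
    if predicted_mortality < 0.01:
        return "low"
    if predicted_mortality < 0.05:
        return "intermediate"
    return "high"

cases = [
    {"id": "A", "predicted": 0.004, "died": True},   # low risk and died
    {"id": "B", "predicted": 0.030, "died": False},  # intermediate, survived
    {"id": "C", "predicted": 0.220, "died": True},   # high risk, anticipated
]

for case in cases:
    case["bucket"] = risk_bucket(case["predicted"])

# A death in the low-risk bucket is the strongest trigger for focused review.
# It also prompts the second question from the talk: was the patient's risk
# categorized correctly, i.e., were all comorbidities captured in reporting?
flagged = [c for c in cases if c["died"] and c["bucket"] == "low"]
for c in flagged:
    print(f"case {c['id']}: predicted {c['predicted']:.1%}, route to review")
```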
The other part is engage, engage, engage. This is a team sport, and I think we all understand that. But we need those champions, not only in our sphere of quality improvement, but within the clinical area that we're trying to address. We need that clinical champion. You cannot do this on your own. That clinical champion has to be working with you, with that clinical team. It also helps you define those priorities for evaluation and action. By doing that, you can move from what's called root cause to root cause squared: taking these findings and creating a process to improve and address the gaps, identifying not just a root cause but a common cause. What are the patterns that are going to inform our actions, and how do we develop strategies and effective countermeasures to continue to close the holes in our Swiss cheese? So with that, I hope this has been helpful. I think there are probably many in the audience who do this work every day, but for those who are continuing to build their strategies for leveraging the data from NCDR and the quality improvement work for our patients, I hope this has been helpful. Thank you.

Wonderful. Thank you both for those talks and for setting the stage, the framework, for our conversation. So we will move into our Q&A, and thank you so much for the thoughtful questions you all have conveyed through our system; we'll transition to asking them of our panel. Amber will represent us from a data perspective, Cindy will represent us from an organizational perspective, Jason, Dr. Wasfy, will represent us from an academic clinician perspective, and Steve will represent us from an ACC registry perspective. So we'll jump into these questions. There has been a lot of activity around wanting to know what has worked at our hospitals in terms of how these discussions take place. How are hospitals openly discussing near misses? Are there ways of doing it that are sensitive to the nature of the discussions but still allow for open discussion? So basically: what has worked, and what is currently being done at our respective institutions? Maybe we can start this way and work our way down. Yeah, I kind of alluded to it in the talk: we do this presently most consistently in our surgical areas, and we're developing more ways to do this consistently within our cath PCI space, and we've also done this in our TAVR space, finding a specific area of common interest within the clinical group to say, we agree that this is a problem from a 30,000-foot level. Let's now get granular in terms of understanding what's happened to individual patients, using root cause analysis to get to a common understanding of where the themes are.
And then, particularly in the surgical space, it's been helpful to use that phase of care mortality analysis to break it down further: are we predominantly seeing problems in the preoperative, intraoperative, ICU, floor, or discharge phases, as it relates to how we develop systems to improve? I will say, I'm a native Minnesotan, and I'm back in Minnesota. We have the most passive-aggressive culture in the world, bar none. So there's a lot of work involved in creating a culture of openness and transparency, because it is not comfortable in our culture to talk about these things openly. There's a lot of work in level setting and understanding that we are better served by being open, transparent, and supportive than by being passive-aggressive and alluding to problems rather than speaking to them openly. Yeah. You know, whenever I hear Steve Bradley talk, I feel like I learn something. So I'll forgive you for the public reporting of my wrestling thing from high school; that's why public reporting is bad news. The culture stuff he's talking about is so important, right? I do think it's critical, and I say this as a physician more than anything else, more than as a researcher, it's really as a doctor that I say this: people have to trust the legitimacy and the intent of the process. If you're leading with this very intense culture of "this isn't blame, this is improvement," it really is going to create a much better environment in terms of using that information to actually improve. I think that's why the public reporting stuff is a little bit difficult, because it's almost definitionally public, whereas the private processes that occur within our hospitals can really be focused on improvement, acknowledging that a mortality can happen even when everything was done perfectly, right? And conversely, a patient can do okay when you made a pretty bad mistake. Both are true. And I think it's important to have a safe space to discuss this openly, and also to create safety around those other situations where the patient does fine but a process was violated. In Steve's way of talking about this, one of the Swiss cheese holes lined up, but not the others, maybe. That's still really important. If you really feel safe and really feel like you're there as a team to do this right together for the patients, and this isn't about I'm going to blame you and you're going to blame me, that's the culture where you actually get into it, right? So even if someone does well, you're still trying to figure out what could have been done better next time, or what, if done wrong in combination with another Swiss cheese event, could actually result in a poor outcome or mortality. We deal with data all the time, thinking about these registries and how you're reporting things and how you're risk-adjusting things, which is really important. But this cultural, soft stuff is really critically important as well. I'd love your thoughts there. Sure, I'll dovetail onto that, and I will take yours even a step further: our doctors were not just passive-aggressive. They were outwardly aggressive when we started our mortality reviews. It was about five years ago when we started these.
And I recall at the time it was described as coming to Cindy's firing squad meeting, which was really not the case at all. So we knew we had a lot to do with culture, really talking to our physicians about why we're coming together. And it was really to improve outcomes for our patients and to look at systems. We're not looking at you specifically and what happened during the case; we want to know what we can improve on. So it was really level setting with them that this is not peer review, because that was one thing they would ask a lot, too. We have a separate process for that. And so we open every meeting now with what our intent is. We'll start with a safety story, maybe a near miss, something like that, just to ground everybody in the work we're about to do. And that has really helped move us to a different culture, a very collaborative environment, where we have a great multidisciplinary team, not just cardiologists, that will come to the table and really have that safe space to identify things from a systems perspective that maybe didn't go well. Maybe it was staffing. Or, and this is a perfect example: we are doing so many radial procedures now that we rarely do a groin, but we're having all these groin bleeds. What's happening? What we identified is that our nurses know how to take care of the radial site, but we have so many new nurses that maybe we hadn't given them the tools they need to really manage the groin site. So that's just an example. We have a couple of different approaches for mortality that I would be involved in. For starters, we have a mortality review forum, and part of the data abstractor's role is to put that together for the physicians. It is a peer-protected forum where they can discuss openly what may have occurred to cause a mortality. Another thing we do is look at mortalities in our chest pain PI meetings as well, and it's not just specific to procedures; we're looking at all mortalities. As an abstractor, I'll put that information together, and then we have a multidisciplinary team laying eyes on it, with different perspectives and different possibilities that come out of that. Some of the areas we have addressed are cardiac arrest, cardiogenic shock, and CHF as causes of mortality, and we've tried to come up with policies and protocols to treat those patients with the urgency they need. Wonderful. This question I think we've already addressed in terms of changing culture, but I'll ask it, and maybe we could go down the row and each provide a quick response: how do we engage providers and change the narrative of quality being police rather than partners? You want me to start? Yeah, or actually we'll start the other way this time and come back this way. How do we engage providers and change our narrative, our perception, from being police to partners in the quality space? Yes, I know that can be challenging. I think you have to come in with a non-threatening attitude: we're here for what's best for the patient, we're working to improve outcomes, and ultimately I think we're all on the same page with that. So I think it's really about what I mentioned and alluded to earlier, really setting the stage at the beginning of the meeting for why we're meeting and what we want to accomplish, and being open to their feedback.
There were a few times when I would have a cardiologist call me back after one of our POCMA reviews and say, wow, would it be okay if we didn't have so many administrators in here, like our CMO and some of our other administrative staff who would come in? And so just recognizing that, and being willing to work through who the intended audience should be and what it is they want to see. So really being open to their feedback, to change the process so that you accomplish what you want out of it. I'm trying to be reflective about this. Actually, on Zoom this morning I did an M&M at my organization; I'm a cardiac intensive care doctor, we do these things, and I had to think about a case where a patient died. So I definitely agree with the points being made about leading with the right intent and reassuring people that the intent is to improve quality and not to blame or judge. I do think those are critical and necessary. And I also think, and I know I'm sort of self-interested here because of my MRM role, that people have to trust the measurement. A lot of this is about trusting the measurement. The intent is necessary, that needs to be established, but there's also an additional step around the relevance of the metric to what they're doing, because engagement breaks down if they perceive that what's being measured is inadequately risk-adjusted or focused on something trivial. I remember back to when I was in training, there was a large national effort on hand hygiene, right? This was maybe 20 years ago, or residency, so 15 years ago let's say, and there were people who kept the score charts, and really it was quite clear what you had to do to get a good score, which is that you had to wash your hands like this when the person was looking. It was quite obvious how to do this. So that's a flawed metric. I'm not sure I have a better way to do it, to be clear, but it was a little bit annoying, right? Because in addition to running around seeing the patients, you had to find the person and make a show of it. So that's the kind of thing that's a problem, and I admit that for that metric, which I'm glad we're not talking about today, I don't have an alternative, but that's the kind of thing that really gets people down. And the hostility, that's actually not professional, right? So that's worth pointing out, but it is also something that happens when people see these things over the years. I think making sure that what we're measuring is as right as possible, as risk-adjusted as possible, and as much under the control of the providers as possible, recognizing you're never going to be perfect on any of those, is important, along with these very important presentation and cultural factors, to creating the safety to improve the quality of health care through measurement and discussion. Similarly, being reflective, I'll piggyback on the first point about agreement in your partnerships about which measurements matter, using cath PCI and bleeding as an example.
In our institution, there are some of our interventionalists who are femoralists, and they will retire femoralists, and one of the challenges is that at a provider or institution level, they don't actually look like outliers in terms of bleeding events; added up over years, they probably would. You could also talk to them about patient satisfaction, but it's not something they're willing to engage on. So let's engage on something that we can agree on, that we're going to work on, and then use that as a stepping stone to other things we're going to work on, and we'll leave the fact that you're going to live and die a femoralist to another day. So I think that's part of it. I also find it fascinating that in some spheres quality has started to turn into a four-letter word, and I don't quite understand why that is. I think part of it is when individuals feel like they're getting hammered on measures that they either don't agree with or don't have buy-in to. So let's stop beating people over the head about measures we don't have genuine buy-in to, unless the measure is clearly right, and then let's keep pressing. Let's help demonstrate the return on investment. The other place where I think quality has started to turn into a four-letter word is at the CFO level, when they look at the investment they're putting into quality and say, well, I've got my CMS and my other metrics, why do I need to pay attention to these other things? So help them understand why this is first and foremost mission-centric in terms of what we do, and second, that bad quality is expensive, and tie that to dollars so that the people who are paying for the work we do understand it's not only mission-centric but actually bottom-line important. So I think understanding your audience as you have these conversations, and I know the question was about policing and the CFO doesn't care about policing, but understanding how to speak to the different audiences to get the engagement and buy-in to continue to move forward is important. Okay, I'm hoping to spark some controversy with this question, which is basically about the use of risk stratification in deciding about procedures. And Jason, I know you've already alluded to the risks of risk stratification in your talk, but these tools are a part of care and decision-making, more in some institutions than others, and people have different feelings regarding them. How do you and your institutions feel about them? Why don't we start with you, Jason, since you've already alluded to this in your talk. How do you all approach standardized use of risk stratification tools for decision-making regarding procedures? So, I mean, given that you led with controversy, I was worried you were going to call on me first, but I'll forgive you for that. I think this is primarily about risk stratification pre-procedure. So I'll say, and I'll say this with tremendous humility, with colleagues here: the first thing I did in my career was develop a risk stratification tool. You can look this up on PubMed. The first paper I ever did was a risk stratification model for readmission after PCI. The idea was that we were going to measure people's risk, and, I mean, this was 12 years ago, and prospectively intervene on them in ways that would improve readmission rates by picking the people who were at highest risk.
The idea is, you know, if you have only so many resources, you intervene on those who are at higher risk. And I would say, I was younger and it was a while ago, but it was one of the most difficult things I had ever done to try to get people to use it, right? One thing is that EHRs were heterogeneous at that point, so the EHR build was one problem, but it was also just hard to convince people that it was worthwhile. I mean, you could sit there and say, hey, why don't you use my risk score? It was embarrassing and sort of difficult to do. You can hard-code things in the EHR, but then you get into that sort of hostility; if people don't think something's useful or worth it in their spectrum of activities that day, hard-coding it doesn't really solve the problem. It makes them do it, but sometimes people just click through and bypass it, and a lot of BPAs aren't hard-stop BPAs. So I think it's a challenge. And how do we solve it? I've also developed measures from NCDR data, later in my career, around discharge to nursing homes, trying to get people to risk stratify for nursing home discharge. Spertus was involved. Again, no one's used it. So I have a series of stories like this. I'm still working on it, and I think the real issue, aligned with my prior comments, is that people have to think it's useful. People do risk stratify, right? We calculate the CHA2DS2-VASc score in clinical practice, and we use it to decide whether to give you anticoagulants. It's not that doctors will never use a risk stratification tool. It just has to be something that's clearly useful, ideally easy to use, and, I think increasingly, built into EHRs. And I'm actually really positive and optimistic about this, because as we get more advanced predictive algorithms, AI and these sorts of things, the algorithms are going to become better and better, and clinicians will see their utility in a way that gets them to use them more. So I do think this is part of the science. We need to advance the science of quality and outcomes measurement as a way to increase utilization, usability, and clinician confidence in these things. I'd love to hear the thoughts of others, too. So I guess I was thinking about it from a little bit different perspective. We all use the thoughtful pause somewhat. We try not to talk too much about that in our mortality reviews, because that's more patient selection. But I do always ask our physicians, what did the patient want? Because I think sometimes we don't always get to the heart of what their goals are, especially when...and maybe that generates some conversation. For example, we recently had a 93-year-old gentleman who came to the hospital and said, I am ready to die. I do not want anything done. Somehow he ended up in the cath lab. And so it was really a miss for us. What I wanted to know was, what did we miss in our collaboration with our docs, from the ED to the intensivist who cared for him to the cardiologist? What are we missing that we could do better? Ultimately, he did end up agreeing to go to the cath lab, so it wasn't anything like that. But how did we not have the conversation and dig a little bit deeper into what his true wishes were? So I think that's just something we need to remember.
When I think of risk stratification, I'm thinking of a lot of our cath PCI patients and having those discussions with our physicians, which historically have never gone as well as I would have hoped. But they do have buy-in. They do understand the significance and how it can be measured. We talk a lot about documentation and the variables that are included in the registry, and about documenting appropriately, just so that we can fairly risk stratify these patients. And we even made a little snippet that they have beside their desk so they're able to report that out, little cues, little reminders for them. I'll just say that our institution has not been very good about standardizing the use of risk stratification tools, and I think there are a lot of reasons for it. One of the challenges is the point about whether this is consistent with the patient's goals of care, because I think a big part of that drives it. If the surgeon or the operator is willing to accept the risk and the patient is willing to accept the risk, what is the risk threshold we're willing to accept? And to Jason's point about mortality data, if we're overly aggressive in saying we need to be careful about doing these procedures in certain patients, oftentimes the biggest return is in the patients at the highest risk. So there's a big tension and trade-off there, particularly since the patient we choose not to go forward with doesn't land in our data. They don't land in our procedural and surgical data, so we can't then see what happened to them. There's a tension there. I do find that one of the ways risk stratification is helpful is in aligning the team in terms of what truth is. Because we have a gut sense, and then the data might tell us something different about what the patient's risk is. Obviously there may be things that are missing, things about the patient that aren't reflected in the model, but it can ground us: in the thousands of patients like this, what does the outcome look like? My favorite is AKI. The risk is always lower than I think it's going to be. And it can be very helpful in having grounded discussions about what the opportunities are. And I'll just interject what we've been doing at our institution with risk stratification tools: in our multidisciplinary quarterly mortality review, we have been retrospectively applying the MIRACLE2 and MAGIC risk-scoring systems. The two tools gave different risk predictions, and we were seeing which fit best for our institution. We found that if either one rated the patient high risk, there was nearly a 100% observed mortality for those individuals going to the cath lab. That was for the out-of-hospital cardiac arrest patients we were applying this to. So I would say that we have incorporated those into our decision-making, though I shouldn't say decision-making on our part; it's more about communicating with families. And I think that if we have tools to convey risk, it can be very powerful to share with patients' families that, based on this data, there is such and such chance of mortality from what we've seen. And if it is their desire, of course, we'll honor that.
But I do think that having that kind of discrete data can be powerful for families as you are supporting their decision-making. Especially hearing some of this, I think it's worth noting that, while I was showing you some examples where risk tools hadn't been adopted, there's CHA2DS2-VASc, the SYNTAX score, the STS score; these are things you don't have to force providers to do. They actually seek out the website and do the calculator. Especially when you start thinking about SYNTAX, and I know that's not a quality score as much as a risk stratification tool for decision-making, it's actually relatively tedious to do. And so I'm bullish about the future of EHRs pulling those data in automatically. I think these things will get used more if they're easier to use. But I do think there's evidence that when the outcome is patient-aligned, and maybe with my readmission tool people just didn't care that much whether they might be readmitted in the future, but whether you die or not, or whether you could have a stroke with this procedure or not, those are the sorts of things that patients do care about and providers care about, and those tools are, in fact, used pretty robustly in clinical practice. Were you going to say something? I was just going to say, I think you're right. One of the things we have done is present the risk stratification tools to the docs, and we were able to get one of them to pilot one, and he sort of said the same thing: it's very clunky. He kind of knows in his head what he's going to tell the patient; if they're in cardiogenic shock, he knows they're probably not going to do well. So he's able to articulate that, but it's not definitive, and I think that's where we would like to get at some point. And just to add to that: selection criteria, any type of algorithm or process you can build for that, could be important not only for the patient but also for your provider, because a death is such an emotional loss for them as well; it's just an added protection, I think. Well, great. We really appreciate all of your perspectives on this very sensitive but important topic, and thank you all for your wonderful talks.
Video Summary
The session focused on best practices in mortality review, featuring talks by Dr. Jason Wasfy and Dr. Steven Bradley. Dr. Wasfy discussed the historical context and challenges of measuring and reporting mortality outcomes, highlighting issues with risk adjustment and public reporting, which sometimes led to risk-averse behavior among physicians rather than improvements in patient care. He emphasized the importance of using these measurements to improve healthcare quality rather than as tools for blame.

Dr. Bradley shifted the discussion to a more granular level, detailing how individual case reviews can identify root causes of adverse events and guide quality improvement efforts. He stressed the importance of multidisciplinary approaches, the need for a supportive culture to facilitate open discussions, and the use of standardized frameworks like root cause analysis and the Swiss cheese model to prevent future errors.

During the Q&A session, the panelists addressed how to foster a culture of transparency, use risk stratification tools effectively, and engage providers as partners in quality improvement rather than as subjects of policing. The discussion highlighted the need for trust in both the metrics used and the processes designed to enhance patient care.
Keywords
mortality review
best practices
risk-adjustment
quality improvement
root cause analysis
Swiss cheese model
multidisciplinary approaches
transparency culture
risk stratification