Everything You Always Wanted to Know About the Composite Risk Model - - Fitzgerald
Video Transcription
Hey TAVR Nation, it's Susan Fitzgerald. I'm an associate at the American College of Cardiology, and I support the TVT Registry. I'm here to talk to you about everything you've always wanted to know about the composite risk model. I have no disclosures.

Our objectives are to understand the TAVR 30-day mortality/morbidity composite: what the endpoints are, who's eligible, and how it's reported; how to interpret your hospital's overall performance as well as your performance within each component of the composite; how you can track your performance; and where to find additional resources to understand the model and your hospital's performance.

Here are some additional resources. On our NCDR website, under the TVT Registry resources and documents, we have a four-part series on understanding risk models, starting from the very basics through how to interpret your results and what to do about them. We also have an executive summary metrics companion guide that goes into all of the details about the model as it's reported in the dashboard, the model as a metric, and the detail lines. There's a manuscript published about this model, and there are other resources about composite outcomes that are good references, because these are newer and more unique among risk models reported today.

So what is a risk model? A risk model is a statistical framework that quantifies an outcome, such as death, based on the probability that it will occur, which in turn is based on the patient's risk factors.

What is the TAVR composite risk model? The TAVR 30-day mortality/morbidity composite is a hierarchical, multiple-outcome risk model that estimates risk-standardized results, which we report as a site difference for the purpose of benchmarking site or hospital performance.

So what is hierarchical? Hierarchical models take into account not just the patient risk factors for composite outcomes, but the hospital factor, or hospital effect, as well. These are also sometimes called complex models.
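To make the idea of risk adjustment concrete, here is a minimal sketch using a logistic model. This is purely an editorial illustration: the coefficients, variables, and function name are hypothetical, not the registry's actual model or weights.

```python
import math

# Hypothetical coefficients for two illustrative risk factors
# (NOT the registry's actual model variables or weights).
INTERCEPT, B_AGE, B_DIALYSIS = -6.0, 0.05, 1.2

def expected_mortality(age, on_dialysis):
    """Expected probability of an outcome, computed from patient risk factors."""
    logit = INTERCEPT + B_AGE * age + B_DIALYSIS * (1 if on_dialysis else 0)
    return 1 / (1 + math.exp(-logit))  # logistic (sigmoid) transform

# A patient with more risk factors gets a higher expected probability:
print(round(expected_mortality(82, False), 3))  # ~0.13
print(round(expected_mortality(82, True), 3))   # ~0.332
```

Summing each patient's expected probabilities across a site is what produces the "expected" counts that the O/E ratios later in this talk compare against.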
Some of our older models at ACC and STS were what we would call non-hierarchical models, which are also called simple or traditional models; non-hierarchical models provide risk adjustment based only on the patient risk factors. They do not include the hospital effect. The TAVR composite is a hierarchical model, and it also includes multiple outcomes, versus one single outcome like death.

These are the composite endpoints: in-hospital or 30-day mortality; in-hospital or 30-day stroke; in-hospital or 30-day VARC severe or life-threatening bleed; in-hospital creatinine increase, which is defined as acute kidney injury stage 3, or in-hospital or 30-day new requirement for dialysis; and in-hospital or 30-day moderate to severe paravalvular regurgitation. Or a patient could have none of the above endpoints.

How were these endpoints selected? The endpoints were selected and rank-ordered based on their adjusted association with one-year mortality and the patient's quality of life, which was measured by their KCCQ. Any outcome with a significant hazard ratio was maintained. Some outcomes were not included based on their lack of significance as it relates to one-year mortality and the one-year KCCQ overall score; examples that were left out are new pacemaker and vascular complications.

Who and what is included? The timeframe for this model is a rolling three years. It's reported with a one-quarter lag behind the published reporting timeframe, which assures data completeness for the 30-day endpoints. So when we publish a 2020 Q4 report, we're actually reporting patients discharged from 2017 Q4 to 2020 Q3. Inclusion at the site level requires green or yellow data quality submissions for base and follow-up, and you need 90% completeness, across all of your patients having TAVR, for the baseline KCCQ, the baseline five-meter walk, and the event status of the 30-day follow-up assessment.
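The rolling-window arithmetic above (a three-year window, lagged one quarter, so a 2020 Q4 report covers discharges from 2017 Q4 through 2020 Q3) can be sketched like this; the function name and signature are ours, not from the registry software:

```python
def reporting_window(report_year, report_quarter, years=3, lag_quarters=1):
    """Return ((start_year, start_q), (end_year, end_q)) of the discharges
    covered by a report, using a rolling window lagged by one quarter."""
    # Work in absolute quarter indices so the arithmetic stays simple.
    end = report_year * 4 + (report_quarter - 1) - lag_quarters
    start = end - years * 4 + 1
    to_year_quarter = lambda q: (q // 4, q % 4 + 1)
    return to_year_quarter(start), to_year_quarter(end)

print(reporting_window(2020, 4))  # ((2017, 4), (2020, 3))
```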
Sites need to have enrolled and submitted records prior to the rolling three-year timeframe, and they must also have at least 60 remaining model-eligible records. This is an example of the risk model and how it looks in the 30-day executive summary dashboard. It's reported as a site difference.

So what is a site difference? A site difference is a newer method that reports composite outcomes that are both fatal and non-fatal. It creates the foundation for site rankings, ranking sites from highest to lowest. It provides a different weight for each outcome based on the clinical importance and timing of that outcome. It's used to report composites and primary endpoints in clinical trials, and you may also see it called a win difference or net benefit in the literature.

Simply put, it's a statistical equation: the probability that an average patient is better off going to your hospital versus an average hospital, minus the probability that an average patient is better off going to an average hospital versus your hospital. In this model, the median site difference is zero. It's not one, it's zero. So a site difference that's greater than zero, a positive number, implies that an average patient is better off going to your hospital versus an average hospital, and a site difference that's less than zero, a negative number, implies that an average patient is better off at an average hospital, not your hospital.

It's important to understand that this metric is reported with a median of zero, but all of the components of the composite, for example death, are reported with an observed-to-expected ratio. Observed-to-expected, or OE, ratios are interpreted with a median of one, where less than one implies better-than-average performance and greater than one implies worse-than-average performance. This is how your hospital site difference is displayed as a metric.
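The pairwise comparison behind a site difference (win difference / net benefit) can be illustrated with a toy calculation. This sketch captures only the pairwise idea: the real model is hierarchical and risk-adjusted, and the outcome ranks and example data below are hypothetical.

```python
# Outcome severity ranks, a simplified stand-in for the composite's ordered
# endpoints (higher rank = better for the patient).
RANK = {"death": 0, "stroke": 1, "bleed": 2, "aki": 3, "pvl": 4, "none": 5}

def site_difference(your_outcomes, avg_outcomes):
    """P(your patient fares better) - P(average-site patient fares better),
    over all pairwise comparisons; ties count for neither side."""
    wins = losses = 0
    for a in your_outcomes:
        for b in avg_outcomes:
            if RANK[a] > RANK[b]:
                wins += 1
            elif RANK[a] < RANK[b]:
                losses += 1
    return (wins - losses) / (len(your_outcomes) * len(avg_outcomes))

yours = ["none", "none", "none", "bleed"]
average = ["none", "none", "death", "bleed"]
print(site_difference(yours, average))  # > 0: better than the average site
```

Note how the severity ranking gives more severe outcomes more influence: a death at the comparison site is a "win" against any better outcome at yours, which is how the composite weights outcomes by clinical importance.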
In this example, this hospital's site difference is negative 0.1, where the median hospital performance is zero. The performance distribution of all US hospitals over the rolling three years is on the right side, and you may note that the range or distribution of hospital performance is rather small. If your metric looks like this, you did not meet the inclusion criteria that were outlined in the previous slides. The detail lines associated with this metric, as well as a patient drill-down, will provide you with all the eligibility criteria so you can figure out where your hospital is falling short. The dashboard 30-day executive summary metric can also reflect your performance compared to the overall registry performance, that's the red box on the right side, and it can also provide a trend of historical performance. Since we just started reporting this in 2020 Q4, you won't see the historical performance for another few quarters.

Now, let's dig into the detail lines of the composite metric. The first seven detail lines provide statistics on the number of TAVRs at your hospital and how your site performed in each of the five eligibility criteria. In this scenario, hospital A is eligible, and hospital B is not eligible based on the eligibility criteria. Please note that the executive summary metric companion guide provides more details for each of these lines.

The next detail lines reflect your performance in the executive summary metric: the composite site difference, as well as the upper and lower 95% confidence intervals. In this example, hospital A is at the median hospital performance. Hospital B, with a negative site difference, is below the average or median, and hospital C, with a site difference of 0.03, is above the average or median hospital performance. Let's take a minute to understand confidence intervals, because they factor into star ratings in a public reporting platform.
This is a little different than looking at your metric performance, which reflects whether your hospital is above or below average, average being the median performance with a site difference of 0. In this example, hospital A is the only hospital with a three-star rating, because their site difference and confidence intervals are all above 0. Hospitals E and F have a one-star rating because their site difference and confidence intervals are all below 0. And all other hospitals, hospitals B, C, D, G, and H, have confidence intervals that cross the registry benchmark or average of 0, and they have two-star ratings. It's important to understand that star ratings are different from being above or below average. A two-star rating is average or expected, and it means your hospital is not statistically different, not statistically better or worse, than the average hospital. About 84% of hospitals have a two-star rating.

Now let's take a look at your hospital's performance in the five cumulative outcome categories. These sections provide the observed, expected, and OE ratio for the cumulative categories of model outcomes at your hospital. Please note that each row reports a cumulative count of each outcome. So the first outcome is death. The second is death plus stroke. The third is death plus stroke plus bleeding, etc. You can compare your site's performance on the observed, expected, and OE ratio for each outcome category. This is a great way to interpret where you may have performed better or worse within your overall site performance. Remember that an OE ratio greater than 1 implies worse-than-expected performance and an OE ratio less than 1 implies better-than-expected performance, and that is different from the site difference, where a site difference greater than zero is better and less than zero is worse. I'm not going to go through each outcome category; the companion materials can help you understand all of these detail lines.
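The star-rating rule described above, where the rating depends on where the site difference confidence interval falls relative to the benchmark of 0, can be sketched as follows; the hospital CI values are made up for illustration.

```python
def star_rating(ci_lower, ci_upper):
    """Star rating from the 95% CI of the site difference (benchmark = 0)."""
    if ci_lower > 0:
        return 3  # entire CI above 0: statistically better than average
    if ci_upper < 0:
        return 1  # entire CI below 0: statistically worse than average
    return 2      # CI crosses 0: not statistically different from average

# Hypothetical (lower CI, upper CI) pairs for three hospitals:
for name, (lo, hi) in {"A": (0.01, 0.09),
                       "B": (-0.06, 0.02),
                       "E": (-0.12, -0.04)}.items():
    print(name, star_rating(lo, hi))  # A -> 3, B -> 2, E -> 1
```

This is why a hospital can sit below the median yet still carry two stars: its point estimate is negative, but its confidence interval still crosses 0.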
But in this slide, I've skipped to the cumulative outcome category of death, stroke, or bleeding. I wanted to point out that Hospital A in the previous slide had 11 patients who died, that the same 11 patients count toward death or stroke, and now their observed outcome for death, stroke, or bleeding is 12. In this example, this hospital had one additional patient counted in the composite endpoint for death, stroke, or bleeding. All of the detail lines for each outcome category provide the same summary statistics: the observed, the expected, the OE ratio, and confidence intervals. The best reference point is the OE ratio, which will always be 1 for the registry benchmark.

Let's take a deep dive into understanding OE ratios for hospitals with different procedure volumes and different OE ratios. Here's the difference between Hospital B and Hospital C. Both have about the same number of patients and about the same count of expected outcomes. However, Hospital B had six observed outcomes, which is reported as an OE ratio of 2.87. So Hospital B did considerably worse than expected as compared to Hospital C, which did better than expected with an OE ratio of 0.37. Look at Hospital D. It's hard to draw conclusions, because they have no observed outcomes and their OE ratio cannot be calculated or compared. This is one of the most important reasons why we've moved to hierarchical models, which always include a hospital effect and hospital factors in the risk model. If you were to include only patient factors, you couldn't draw conclusions and benchmark for a hospital like this, and you can't assume that they are better than a facility that has some observed outcomes, especially a hospital that has much higher volume. Hospital E and Hospital F are higher-volume hospitals, thus they have a higher count of observed and expected outcomes, especially as you compare them to the lower-volume hospitals like Hospital A, B, or C.
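The O/E comparison above reduces to simple arithmetic. This sketch uses counts echoing the Hospital B and Hospital D examples (the exact expected counts are illustrative) and returns None when no ratio can be calculated.

```python
def oe_ratio(observed, expected):
    """Observed-to-expected ratio; None when it cannot be calculated,
    as with Hospital D's zero observed outcomes."""
    if observed == 0 or expected == 0:
        return None
    return round(observed / expected, 2)

print(oe_ratio(6, 2.09))  # 2.87 -> considerably worse than expected
print(oe_ratio(0, 2.10))  # None -> cannot be benchmarked directly
```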
In this example, Hospital E is performing worse than expected and Hospital F is performing better than expected. These hospitals' OE ratios, not their observed or unadjusted rates of an outcome, provide really important feedback on their performance. The comparison of observed and expected outcomes is really the key to benchmarking performance, regardless of your volume.

The previous section provided cumulative counts for each composite outcome. This section provides details on the observed outcomes based on the worst outcome category; each is reported only once, not cumulatively. This section provides the numerator, denominator, and percent for each observed outcome at your hospital. It also provides the registry aggregate comparison as a reference point. Remember, these are all unadjusted counts based on the worst outcome for each patient.

You can access the patient drill-down in the dashboard for the 30-day executive summary metric. The patient drill-down provides detailed information about each patient in the risk model. It includes information about whether a patient was risk-eligible or not, and the reason why they were not eligible. There are columns in the patient drill-down that provide details for each patient regarding their observed outcome, their cumulative observed outcome, and their expected outcome for each composite outcome category. There are also columns that show how the patient was coded for each of the 40-plus model variables.

To summarize what we've reviewed so far: we've talked about the TAVR 30-day mortality/morbidity composite that's reported in the 30-day executive summary dashboard as a metric, and we've reviewed the patient drill-down and the detail lines, which include eligibility at the site level, the TAVR composite site difference with confidence intervals, the cumulative composite details, and the observed outcome details. So hopefully you've had a chance to review your site performance.
Now you have an opportunity to improve patient outcomes and your hospital performance. We're going to use FOCUS PDSA as a resource for QI. I've provided a link here to a reference or source document for you to use for any QI project. FOCUS PDSA was introduced by Dr. Deming in 2001 in the Institute of Medicine report called Crossing the Quality Chasm: A New Health System for the 21st Century. It is the gold standard for QI models in healthcare. It's a simple template to use for a QI project where you essentially find the problem, organize a team, clarify and understand the problem, and select an intervention; and then you plan, do, study, and act.

So what's your problem? Let's start by looking at your performance from the top down. Are you eligible for the model? If you're not eligible, why not? Is your site difference above or below zero? Below zero implies worse-than-average performance, and above zero implies better-than-average performance. Are your confidence intervals above or below the registry median, which is zero? What are your OE ratios within each of the cumulative outcome categories? An OE ratio greater than one implies worse-than-expected performance, and less than one implies better-than-expected performance. You can review your observed outcome details, which are unadjusted, and review the patient drill-down for patient-level details on your hospital's performance.

So what's your problem? If your site's not eligible, there's a lot of information in the detail lines and patient drill-down to help improve your process of capturing data. What should you do if your performance is worse than expected? We're going to go through some scenarios in the next slides.
First of all, I want to say that if your performance is worse than expected, please don't consider it to be a problem with the quality of your program or with a specific operator, and please don't look at only one year or one quarter of performance.

Let's look again at understanding your problem. What are some next steps? First, you should look at your data quality reports. Are your submissions up to date for all three years for base and follow-up submissions? Do you have any concerns with the completeness and accuracy of data collection? For example, do you have data missing in key variables, or do you have some 30-day follow-ups that are missing? Did you review your patient drill-down? Then, on overall performance: are you eligible? Is your site difference less than zero? Are your site difference confidence intervals within the registry average? And is your performance isolated to one reporting period? Again, probably the simplest thing to do, if you are eligible, is to look at the OE ratios: an OE ratio less than one implies the outcome is better than expected, and one greater than one might be your problem.

Now, in understanding the problem, we're going to dig a little bit deeper into each of the composite endpoints, and then I'm going to follow up with a subsequent slide with some references to help you with your next steps.

Okay, so your mortality rate is higher than expected. Mortality obviously is the worst outcome associated with any interventional procedure. The relationship between increased site volume and lower mortality rates has been validated in several studies in the TVT Registry, particularly for those hospitals with fewer than 100 procedures. There are other factors that can have a positive impact to reduce your 30-day mortality rate.
Besides your hospital or operator volume and experience, these can include appropriate patient selection and many aspects of post-procedure care, because many other outcomes, such as bleeding, acute kidney injury, or stroke, are also correlated with a higher mortality rate. In the future, we will also be reporting metrics for appropriate use criteria. Under the appropriate use criteria for patients with severe aortic stenosis, there are rarely appropriate indications for patients who have many comorbidities, who are very high risk, or who are not expected to live more than a year. So you would expect a higher mortality rate for some of these patients, and you might see that their indication for the procedure is rarely appropriate. If you've organized a team at your hospital to look at your mortality rate, here are six great reference articles for you to use as you work to reduce it.

So your stroke rate is higher than expected. Stroke obviously is the most debilitating complication after TAVR, and stroke is associated with a higher mortality rate. The incidence of stroke has been essentially the same over the last 10 years. We have noticed that a lower site volume is associated with a higher stroke rate, again particularly for those hospitals with fewer than 100 procedures. And there are some patient risk factors that increase the risk of stroke, such as a history of smoking, peripheral artery disease, AFib, and use of anticoagulants or antiplatelets. Transcatheter cerebral embolic protection has recently been approved and may reduce the incidence of stroke. The literature has some mixed results on the role of embolic protection; however, the clinical trials have demonstrated that the devices capture embolic debris in 99% of patients having TAVR and may reduce the incidence of stroke.
If your stroke rate is higher than expected, here are four references for your QI team to use as you work to reduce the stroke rate at your hospital.

Okay, so your rate of acute kidney injury is higher than expected. There are three things in the literature to look at. One is patient risk: you should assess the glomerular filtration rate pre- and post-procedure for all of your patients, and identifying other pre-procedure risk factors and adjusting the management of this high-risk group accordingly will reduce the incidence of acute kidney injury and mortality in patients undergoing TAVR. Two is fluids: hemodynamic monitoring approaches, the composition of fluids and IV replacement therapy, and the avoidance of nephrotoxic agents, especially for patients deemed to be at higher risk of acute kidney injury, can result in a lower incidence of acute kidney injury. And three is contrast: contrast use has been found to vary among physicians, and the amount of contrast used was not decreased for patients at higher risk of acute kidney injury in one of our TVT Registry studies. These findings identify opportunities to reduce acute kidney injury in patients undergoing TAVR. If your hospital has a higher-than-expected rate of acute kidney injury, here are three references for your QI team to use to help reduce the incidence at your hospital.

So your rate of bleeding is higher than expected. Again, bleeding is associated with a higher mortality rate. You should develop protocols to identify pre-procedure risks of bleeding and adopt bleeding-avoidance strategies via meds, access sites, and closure devices. Though patients should be discharged on dual antiplatelet therapy after TAVR, we have found many gaps in practice, and patients discharged on dual antiplatelet therapy do have a significantly higher risk of bleeding. There are new developments to reduce bleeding risk or bleeding events.
Newer developments are things like alternative access sites, smaller sheath sizes, and pre-procedure CT of the access arteries. We also have developing guidelines to optimize antithrombotic therapy, which hopefully will reduce bleeding as a complication over time. I remember working with a hospital that had a higher-than-expected bleeding rate and couldn't figure out why. They used their patient drill-down and realized they had a lot of patients with a documented GU bleed, and that they were putting Foley catheters in almost every patient. They stopped putting Foley catheters in their patients, and it reduced their bleeding rates. It was as simple as that. If your bleeding rates are higher than expected, here are three references for your QI team to use as you work to improve your bleeding rates.

Okay, so if your rate of paravalvular leak is higher than expected, what should you do? First of all, a mild paravalvular leak is common, and it can be clinically silent. The more significant, moderate or severe, paravalvular leak is infrequent, but once it's detected, it should be monitored, because it will be associated with hemodynamic deterioration and worse outcomes and will require subsequent treatment. There are certain things that will minimize or reduce your incidence of paravalvular leak. Mostly, appropriate valve sizing pre-procedure is the biggest factor. We started out 10 years ago sizing valves using echo; the gold standard today is a pre-procedure CT, which has reduced the incidence of paravalvular leak. Newer-generation valves have also reduced the incidence of paravalvular leak, and there are specific procedural techniques that operators can use to detect and treat paravalvular leak during the procedure. Most of the time, it happens when there's incomplete adherence of the valve to the aortic annulus or a low or high implantation of the prosthesis.
In those cases, there are techniques to fix the problem and minimize or eliminate paravalvular leak. If you have a higher incidence of paravalvular leak and your QI team is looking to reduce it, here are two great references for you to use.

So once you've identified and understood the problem, you can clarify it further by using data in the dashboard. Use your detail lines and your patient drill-down to provide more information to your QI team. Present this data to your team and study it to determine the appropriate actions.

Just to wrap this up: risk models are necessary to provide a fair comparison of outcomes across hospitals, and they also aid in clinical decision making prior to the procedure. Risk model results are a tool that can be used to improve the quality of care at your facility. So what are the next steps for you? Let's be disruptive. To improve the transcatheter valve program at your facility, use FOCUS PDSA to start a QI project and submit your project as an abstract to NCDR 22. I believe that each of you has the knowledge and tools to improve the processes and outcomes at your hospital. Use this FOCUS PDSA toolkit, be disruptive, which is a word that Harlan Krumholz uses, and share your success with us next year. We had a TVT Registry participant who won first place in a poster contest at a previous meeting, and I hope you will consider using these tools to implement a QI project at your hospital using your composite performance and present it at next year's Quality Summit. Thanks for listening to this on-demand presentation. I hope I helped you understand the composite risk model, and I hope you learned how to use FOCUS PDSA to improve the performance at your hospital.
Video Summary
In this video, Susan Fitzgerald, an associate at the American College of Cardiology, discusses the composite risk model in relation to the TAVR 30-day mortality/morbidity composite. The video covers objectives such as understanding the endpoints and eligibility criteria of the composite model, interpreting hospital performance within each component of the composite, tracking performance, and finding additional resources for understanding the model. Fitzgerald provides resources including a four-part series on risk models, an executive summary metrics companion guide, a published manuscript on the model, and other resources on composite outcomes. A risk model is explained as a statistical framework that quantifies the probability of outcomes based on patient risk factors. The TAVR composite model is hierarchical and considers both patient and hospital factors. The composite endpoints include mortality, stroke, bleeding, acute kidney injury, and paravalvular regurgitation; the selection of endpoints is based on their association with mortality and patient quality of life. The video also discusses site differences, confidence intervals, and star ratings based on performance compared to average hospitals. Fitzgerald emphasizes the importance of analyzing data and identifying areas for improvement, suggesting the use of the FOCUS PDSA model for quality improvement projects. She provides references for reducing mortality, stroke, acute kidney injury, bleeding, and paravalvular leak. The video concludes with a call to action for viewers to implement a QI project at their hospital and present their progress at a future conference.
Keywords
composite risk model
TAVR 30-day mortality morbidity composite
endpoints
hospital performance
data analysis
quality improvement projects