Hierarchical Risk Modeling – You Can Get This! - Klein
Video Transcription
Welcome, everyone, and thank you for joining this Quality Summit Hot Topic session focused on understanding hierarchical risk models. My name is Connie Anderson, and as the CathPCI and Chest Pain – MI Registry Product Manager, I'll moderate our session today. I'd like to begin by introducing our panel members. Kim Lavin is the Science Team Liaison to the CathPCI Registry and works closely with the Data Analytics Centers, which develop NCDR risk models. She has a wealth of nursing experience and has worked extensively in driving hospital-based quality improvement. But most importantly, she has just welcomed her first grandchild, a boy who is just three days old. So congratulations, Kim. And we are joined by Dr. Klein, who is Clinical Professor of Medicine at the University of California, San Francisco. He is one of the founding fathers of the NCDR, has served on the CathPCI Registry Steering Committee, has championed the measurement of risk-adjusted outcomes to evaluate interventional programs and operator quality, and has been instrumental in numerous clinical and interventional practice guidelines. You may remember him from previous NCDR meetings, where he spoke eloquently on the AUC and the risk-standardized bleeding model for CathPCI. Dr. Klein, thank you for joining us again. Let's get started. It's a great honor to speak at this year's American College of Cardiology Quality Summit. Thank you, Connie, for inviting me again. As NCDR participants, you collect your lab's data, submit it to the NCDR, and receive a report benchmarking your results against other hospitals. Have you ever wondered what happens to the data that you submit, how it is organized and analyzed, and how those reports are generated? Well, in this talk, I'm going to tell you briefly what happens. CathPCI uses statistical models to provide risk-adjusted outcomes to participating hospitals. 
The results of these models can be used to benchmark performance and enhance quality improvement efforts. There are important benefits to these statistical methods, but they are as easily misunderstood as understood. We're going to talk about two different kinds of models, a traditional model and the hierarchical model. The hierarchical model is what is typically used within the NCDR to create your lab's report. In a traditional risk model, patient variables and all expected events are analyzed as they occur, and then an expected incidence of complications, such as bleeding or mortality, is generated as a percentage. These traditional models assume that all information about a hospital's performance is found only in that hospital. So if in your hospital you've done 10 catheterizations and interventions and have one adverse event, then the observed complication rate is 10%. Very simple. The traditional model then provides an observed-to-expected ratio, and that is multiplied by the registry aggregate rate to get the event rate. This simple calculation can be done by the participant looking at the detail lines. Individual cases cannot be assessed, because the weight for each patient variable cannot be calculated by the participant. So traditional models say, out of the entire registry, everything we can know about hospital A's true bleeding rate or other complication rate is contained only in those 10 patients. Put another way, your hospital has nothing in common with any other hospital that might inform its performance. This is the observed-to-expected ratio for mortality. As you can see, the mean O-to-E ratio is approximately 1.0 to 1.2. This way of looking at mortality was actually originated by me with the other founding fathers. You take the O-to-E ratio, multiply it by the standard rate of occurrence, and then you have the percentage that you're expected to have. Hierarchical models take a different approach. This is a database design that uses a one-to-many relationship for data elements. 
It's optimal for managing large data sets in which fields and records are nested within groups or classes. A hierarchical database consists of a collection of records that are connected to each other through links. Each record is a collection of fields, each of which contains only one data value. A link is an association between two records. This uses a tree-structured design in which a tree structure links together a number of disparate elements to one parent record. Files are related in a parent-child manner, with each parent capable of relating to more than one child, but each child only being related to one parent. You can tell this is a tree-structure diagram because it has boxes, which correspond to the kinds of records, and lines, which correspond to the links between them. And the idea of the diagram is that it shows the overall logical structure of the database. So here, for example, is a classical hierarchical model. The black dot on the top may represent something like mortality, and a shared hyperparameter might be ST-elevation MI. And then under each of those might be, for example, age, or number of cases, and so on and so forth. Rarely do you get something that looks that simple. It more often will look like this, where you have a number of different child connections to the parent. There's usually a root or top-level directory that contains various other directories, and each subdirectory can then contain more files and directories, and so on. So in hierarchical risk modeling, we don't just take patient variables, we also include hospital variables. Because each hospital type will see a different kind of patient, we put it all through a mathematical model, and then we determine a predicted value for your hospital based upon the kinds of patients that you're seeing. What does this add? 
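The parent-child structure just described can be sketched as nested records. This is a toy illustration, not the registry's actual schema; the record names (mortality, STEMI, age, and so on) are borrowed from the examples above.

```python
# A minimal hierarchical tree: each parent links to many children,
# and each child belongs to exactly one parent (by containment).
tree = {
    "record": "mortality",        # the root (the black dot on top)
    "children": [
        {
            "record": "STEMI",    # a shared-hyperparameter level
            "children": [
                {"record": "age", "children": []},
                {"record": "case volume", "children": []},
            ],
        },
        {
            "record": "NSTEMI",
            "children": [
                {"record": "age", "children": []},
            ],
        },
    ],
}

def count_records(node):
    """Walk the tree and count every record (this node plus all descendants)."""
    return 1 + sum(count_records(child) for child in node["children"])

print(count_records(tree))  # 6 records in this toy hierarchy
```

Because every child record sits inside exactly one parent's list, the one-parent constraint of a hierarchical database is enforced by the structure itself.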
Well, hierarchical models say that that 10% event rate that I told you about before in a small rural community hospital has a different patient basis, and a different relative impact on outcome, than the same event rate at a large urban university hospital. Rural hospitals have a different patient population with different risks and indications, and they have a smaller number of patients, so any one complication adds a larger percentage to the complication rate. Therefore, we need to make further adjustments to the observed rate to level the playing field. So the observed rate, remember, that was 10% for hospital A, is replaced by a predicted rate that integrates your hospital's observed performance, the performance across the population as a whole, and the performance of hospitals similar to yours. What ends up happening is you get a weighted average. For hospitals with large volumes, the predicted rates will be close to the observed rates, but for those with smaller volumes, they will be close to the population or group means. It also takes into account some interesting mathematical problems with statistics, including shrinkage, which matters when little unique information is available for a hospital. I'll show you an example later. One of the main advantages of hierarchical modeling is that we can analyze the data by regression. This can be done for prediction or for description. This allows us to analyze complex data sets. We can analyze them in a nested structure and in a non-nested structure, and we can take into account random effects. Why do we need to do this in the NCDR? Because hierarchical models take into account the fact that there are common factors amongst hospitals, as well as distinctive factors that influence performance. 
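The volume-weighted average just described can be sketched as follows. The weighting formula and the constant k are a simplification for illustration, not the actual NCDR model, and the rates are hypothetical.

```python
def predicted_rate(observed_rate, volume, group_mean, k=100):
    """Shrink a hospital's observed rate toward its group mean.

    Low-volume hospitals carry little unique information, so they are
    pulled strongly toward the mean; high-volume hospitals keep a
    predicted rate close to what they actually observed.
    k is a hypothetical 'prior strength' expressed in equivalent cases.
    """
    weight = volume / (volume + k)
    return weight * observed_rate + (1 - weight) * group_mean

group_mean = 0.04                                # event rate among similar hospitals
small = predicted_rate(0.10, 10, group_mean)     # hospital A: 10 cases, 1 event
large = predicted_rate(0.10, 2000, group_mean)   # same observed rate, 2000 cases

print(f"small hospital: {small:.3f}, large hospital: {large:.3f}")
```

The same 10% observed rate ends up near the 4% group mean for the 10-case hospital, but stays close to 10% for the 2000-case hospital, which is exactly the behavior described above.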
There are universal factors that influence all hospitals: guidelines, new treatments, reimbursement, temporal trends. Your observed performance is going to vary randomly about the population mean. But there will be specific factors that influence similar hospitals: the size of the hospital, the academic status, whether you have fellows working in the lab, urban or suburban location, and so on and so forth. So instead of getting an O-to-E ratio, you get a P-to-E ratio, in which the P stands for predicted rather than observed rate, which is the estimated rate from the hierarchical model based upon patient factors and all of the other factors that we've mentioned. Now, I know you would love for me to show you the mathematical underpinnings of all of this, because after all, what kind of a talk doesn't show you integrals and differentials? But I think it would probably be a lot more understandable if I give you a non-medical example. Let's say that we're all baseball fans, and let's see if we can predict how major league baseball players will perform in terms of their batting performance for an entire season, 162 games, based on just their first 45 at-bats, which is just the first 10 or 15 games. In fact, this example was the one which first brought hierarchical modeling into the statistical world. The standard model would assume that the first 45 at-bats are a random sample of all at-bats and are therefore comparable to the rest of the season, but a hierarchical model accounts for shrinkage. That is, the more times at bat, the less variation you will see, and for regression to the mean: the more times at bat, the closer everybody gets to average. It also takes into account random variation and prior observations of how major league baseball hitters perform, and it assumes that we know nothing about the individual batters and doesn't distinguish them based on their known history and their quality. 
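That setup can be sketched as a simple shrinkage estimate. The fixed shrink factor below is purely illustrative; the classic analysis of the 1970 season estimated the amount of shrinkage from the data itself rather than fixing it in advance.

```python
# Shrinking early-season batting averages toward the league mean (~.250).
# The first-45-at-bat averages are for two of the 1970 players discussed
# in this talk; the shrink factor of 0.8 is an assumed, illustrative value.
league_mean = 0.250
shrink = 0.8  # fraction of the distance each batter moves toward the mean

first_45 = {"Clemente": 0.400, "Munson": 0.178}

predicted = {name: avg + shrink * (league_mean - avg)
             for name, avg in first_45.items()}

for name, avg in predicted.items():
    print(f"{name}: predicted season average {avg:.3f}")
```

Note what the arithmetic does: the hot start (.400) is pulled down and the cold start (.178) is pulled up, compressing the whole range toward .250, which is the shrinkage and regression to the mean being described.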
So even though we know that a Roberto Clemente is going to do well every year, and we know that a Spencer King is going to do well every year, we can't take that into account when we do the modeling. And we want to take into account the things that are part of the game: hot streaks, slumps, injuries. They happen to baseball players. They happen to interventional cardiologists as well. And we need to take all of these factors into account. So how can we predict batting results for the whole season from the first 45 at-bats? Well, we assume, as shown at the top, a bell-shaped curve, a Gaussian distribution. Batting average, in case you don't remember, is the proportion of hits to at-bats. The average major league baseball player hits about .250 for the year. That means he gets a hit 25% of the time. In general, major league baseball players are not going to bat under .100, because they wouldn't be in the major leagues anymore, and they're not going to bat over .400. The last person to do that was over 60 years ago. So based upon this, we can figure that all the batters we're going to be looking at will fall somewhere along this distribution. Here are 18 major league baseball players from the 1970 season, 50 years ago. And what you see along the diagram are three dots: a gray dot, meaning how these players actually did in their first 45 at-bats; a black dot, the predicted average based upon a hierarchical model; and a yellow dot, which is exactly how they did do for the rest of the year. And what you can see, going through this quickly, is that in every case but one, the black dot is closer to the yellow dot than to the gray dot. Take, for example, Thurman Munson, the Yankees catcher. He started off the year doing not so well, batting under .200. But you can see that by the end of the year, he was batting over .300. And you can see that the black dot is closer to the yellow dot than the gray dot. 
On the other hand, you have Roberto Clemente, one of the greatest baseball players of all time. He started off batting .400 for the first 45 at-bats. Well, he didn't stay at .400, nobody does, but he ended up hitting about .340. And again, the predicted value was closer than the 45 at-bat estimate. And if we look at it from this standpoint, we can now see exactly how the hierarchical modeling works. You can see shrinkage. You start with a range that goes from a batting average of .160 to .400, which we know is not going to be the truth at the end. The hierarchical modeling brought them into a much narrower range. You also see that all the lines tend to push them toward about .250. That is the mean. So this shows you regression to the mean, as well as shrinkage. And as you can see, these models did very, very well. They did an excellent job of predicting the real outcomes. Well, how does this relate to the baseball game we play in the cath lab? First, we have bleeding. And by bleeding, we mean local bleeding, as well as retroperitoneal hematomas. What we have here is a graph with traditional risk-adjusted bleeding along the x-axis, the line of unity in red, and hierarchical adjusted bleeding along the y-axis. What we see is that the slope is flattened compared to the unity line. That shows regression to the mean, and the compressed range shows shrinkage. In other words, sites with lower observed bleeding rates will be predicted to have higher rates, and those with higher observed rates will be predicted to have lower rates, with a decreased range overall, just as we saw in the baseball model. And this model is highly predictive, as published in JACC seven years ago. Just like in baseball, where not every pitcher is the same, not all high-risk cases are the same. And let me show you that with two out-of-hospital cardiac arrest patients from my practice. 
It shows you why we need hierarchical models to account for all the variables. This is a 38-year-old international businessman who developed chest pain while riding to Wrigley Field. He got off at our hospital's station and had a cardiac arrest in the street, half a block from the emergency room. He had diabetes, high cholesterol, and a smoking history. He had return of spontaneous circulation in five minutes; we placed stents in the LAD, circumflex, and right coronary arteries, and he's still alive and doing well eight years later. This is the kind of patient whom a good interventional cardiologist would be expected to salvage. This guy has a lot of very positive things: he's younger, his arrest wasn't that long, and it was only half a block from the hospital. Now take this patient, a 48-year-old mother of three, who had been in cardiac arrest for 30 minutes. She comes to the emergency room, is comatose, and has no neurologic function. She's placed on hypothermia, but the husband wants everything done. Everyone knows that this patient has a very unlikely chance of surviving. After 72 hours, neurology said that the brain damage was irreversible, and she was placed in hospice. What this shows you is that you can't just take out-of-hospital cardiac arrest as one general category. You have to have these kinds of nested, tree-like structures that I showed you in order to understand that a patient like this isn't going to be salvaged by anyone, and not to count that against the program. So here, for example, in the acute MI risk-standardized mortality model, we do have cardiac arrest, but we also have age and a lot of other factors which would be taken into account in these examples. And we can see that presentation after cardiac arrest is a very strong predictor of poor outcome, but within that, how much is adjusted matters a great deal. 
So here we have, for example, 14 patients whose predicted mortality is based upon their patient-level factors, but the blue line shows how the hierarchical model takes into account the factors that I've mentioned and gives you a completely different value. Some are lower, some are higher, just as the patients themselves differ. So in summary, hierarchical models offer certain benefits over traditional risk prediction models. Mortality and bleeding risk, for example, can be better approached by a hierarchical model than a traditional model, providing more accurate appraisals of program quality. The CathPCI risk model is an excellent example of benchmarking cath lab quality that goes beyond observed adverse events such as mortality and bleeding. Thank you very much for your attention. Thank you, Dr. Klein. Okay, we have some questions. So the first one, Kim, is directed to you. Dr. Klein discussed two different types of risk modeling used by the NCDR, hierarchical and traditional. Can you explain the overarching NCDR vision for risk model reporting for the registries? Yes, thank you. The NCDR is moving towards hierarchical risk modeling for all our executive-level metrics. Since we know that facilities may differ with respect to their outcomes, this will allow the model, or metric value, to represent both patient factors and the hospital effect. However, we will continue to report the non-hierarchical models, or what we call the risk-adjusted models, in the detail lines. So you will be able to access your O-to-E ratio in your detail lines for the risk-adjusted metrics. And then your risk-standardized metrics, your hierarchical models, as we move to those, will be reported as executive-level metrics. And then the good news is that both of these models can inform your internal performance improvement activities. 
That's good you went there, because that's my next question. The hierarchical model feels very abstract to participants. I mean, Dr. Klein just finished saying that a lower observed rate will give you a higher predicted value, and with a higher event rate you will get a lower predicted value. So all of that can feel very arbitrary when looking at the hierarchical model, and it makes using it for quality improvement purposes feel very abstract. So can you help us understand how a hierarchical model helps drive quality improvement at the facility level? Sure. Well, one of the major things the NCDR has taught us is the exceptional importance that patient characteristics have in determining PCI outcomes. So to level the playing field, complication rates have to be adjusted to account for the risk of the patients that a center sees. Plus, small centers have smaller volumes, the denominator, if you will, so any single complication, the numerator, will impact the complication rate more, and hierarchical modeling is the optimal statistical way to correct for these factors. So if you are an outlier hospital, then there may be a reason to address this question and see if there's some opportunity to improve things. Whether you are an outlier or near the standard deviation cutoff, it doesn't really matter. The hierarchical modeling is not going to make an average program look terrible, and it's not going to make a great program look awful. So you need to take these ratings into account and see if there's something at your institution which can be fixed. Dr. Klein, the risk model value a hospital receives is derived from the results of other centers. It's a predicted value. Do you think that's fair? It is fair. And as a matter of fact, it's a more accurate way for a center to understand its results. So let's take the baseball model that I showed. 
If you faced Bob Gibson that day, the Cy Young Award-winning pitcher in 1970, you could expect to have a very bad day at the plate. Whereas if you were hitting against a rookie pitcher, you would expect to have a great day. These all average out over the course of the season, and so you end up with an average over great pitchers and bad pitchers. The same thing is true in a cath lab. If you see a lot of patients, you're going to have patients like the first one that I showed you, whom a good center will be able to save and do well with. And you'll also see the other kind, whom no one would be able to save. If you only see a few patients, then one or two of those kinds of extreme patients will disproportionately impact your observed rates. So hierarchical modeling is the way to correct for that, and it will give you a more accurate assessment of what your quality is. So Kim, when you hear that, having worked in quality for so long, how will the predicted values a hospital receives from a hierarchical model help institutions when they are seeking to improve their processes or improve their quality? Yeah, great question. I think that the model outcome, the predicted value, shouldn't necessarily be taken in isolation, right? It should be a starting point to really delve into your data. So what can we learn from it, partnering it with some of your other metrics in terms of adverse events? We talked about bleeding, et cetera. But also think about the predicted value not just with that rolling four-quarter patient population you're looking at, but about how you can then use this data to predict going forward, if your patient population and your hospital factors remain the same. 
But for overall quality improvement efforts, I think I've had the most success when you have a robust cardiac service line, when you really look at the severity of illness of your patients, and when you review every mortality to determine opportunities for improvement. If new clinical practices are put in place, for example, you start to do chronic total occlusion cases, you should have certain markers that you're looking for, whether it's actually a quality program or whether you're having events like perforation. Those types of things can really impact overall care, and you might not see them as a mortality, but they will change your patient mix, as well as potentially your hierarchical or risk-standardized mortality, due to changes in your hospital mix as well as your patient mix. What you're saying is we shouldn't really look at a hierarchical model outcome as the one thing to guide us. It's a global effort, right? It's part of the process. Correct, I don't think it's going to give you your answer. It's going to give you a starting point and a conversation. But in terms of really understanding your patient population and what's going on, you really need to look at certain other indicators, right, of whether you have a quality program going on. We have other metrics that look at your adverse events. We have metrics that look at just overall bleeding events, as well as your STEMI care, whether you're doing things correctly, et cetera. And there was a slide there looking at the actual predicted value for each patient. When you look at that in your patient-level detail, we show you the patient-level variables that are taken into account in order to create that predicted value. And so, for instance, let's say you look at a mortality and you say, wow, that was a really low predicted value there comparatively. Was the documentation such that it captured the true severity of illness? 
And not just for that one patient, but for every patient in that time period, so that we truly know the case mix of your patients. And that's another good way of looking at things. I think that's also a nice feature of the risk-adjusted model, that we still provide participants with detail lines, because that O-to-E ratio is almost very black and white, right? These are your observed bad outcomes, and this is what was expected. So we can really evaluate: are we documenting and capturing all of those predictors, all of those risk factors, to come up with a decent expected rate? And if that value is really off, it could just be off because we did have somebody die unexpectedly who didn't have risk factors, and that will happen. And so what I'm hearing is that the predicted model will not be as susceptible to those kinds of events, and with that predicted value we'll get a more even-keeled performance trend, which will really help us understand how our hospital is performing when we have either very few events or many events. And it's pulling us to that mean, right? I think that, Connie, you mentioned looking at our patient-level detail and detail lines and looking back at the risk-adjusted value for mortality or bleeding. An O-to-E ratio is very easy to look at in terms of, are you performing as expected or are you not performing as expected? Anything over one, you're performing worse than expected; less than one, you're good. So that's a starting point, right? And sometimes we get caught up in the rate that's on our executive-level metric, when that's just a multiplication of the overall registry observed value. So really, I always felt, when I was talking to the quality board or the service line, that I would just quickly show them the O-to-E, saying where we are right there, and then also knowing that we were doing all these other things on the back end. 
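The O-to-E arithmetic described here can be sketched in a few lines of Python. All of the rates below are hypothetical, not registry values.

```python
# Traditional risk-adjusted rate: O/E ratio times the registry aggregate rate.
observed_events = 1       # adverse events at a hypothetical hospital
cases = 10                # procedures performed
expected_rate = 0.05      # model-expected event rate for this case mix (assumed)

observed_rate = observed_events / cases          # 0.10, i.e. 10%
oe_ratio = observed_rate / expected_rate         # over 1: worse than expected

registry_rate = 0.04      # aggregate event rate across the registry (assumed)
risk_adjusted_rate = oe_ratio * registry_rate    # the rate on the report

print(f"O/E ratio: {oe_ratio:.2f}, risk-adjusted rate: {risk_adjusted_rate:.1%}")
```

This is the black-and-white quality of the detail-line calculation: an O-to-E ratio over one means more observed events than expected, under one means fewer, and the reported rate is just that ratio multiplied by the overall registry observed value.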
And so again, for the hierarchical model, we have the P-to-E, but we're also providing the detail level so that you can look at your actual outcomes over expected, quickly look at your O-to-E ratio, and see where you are. All right, so I have a question for you. For me? For both of you. I'll let you answer it first, and then Dr. Klein can take a stab at it. The hierarchical risk model is harder to understand, and it's not as transparent, right? We don't have numbers we can add up from the detail lines. So we are accepting its output on faith. How can we be assured that the hierarchical risk model is functioning correctly? What checks and balances are in place to make sure that the results are right? Well, I think the way to go about doing this, and I think you are doing it, is you're looking at expected rates and you're looking at the mixture of those expected rates from place to place, and you're making sure that there's a continuum. So that's how you're doing your checks. I think that for a center, they're still going based upon whether they are two standard deviations beyond the mean. And I still think that despite hierarchical modeling, you can have good years and bad years. I know that when I was in the cath lab, I had great years, and I also know I had one or two really, really bad years. In those years, I just had bad results because of the patients that came in. I think that you just have to look at it year by year, or six months by six months, see where you are, and try to see if you can find trends that last, and also look at your case selection. I think you just have to do good medicine. I don't think there's anything you can do with the numbers per se. Yeah, I think that's our summary note. That's our takeaway. So I think we need to close this session now. We're at time. Thank you, Dr. Klein, for the thorough explanation of hierarchical models and helping us understand them. 
I think I'll still have to rewatch this presentation to absorb some of the many details you gave us. I so appreciate your dedication to our participants and your willingness to meet with us. This is, I think, the third time I've asked you to speak to our group, and everybody is always thrilled to see your name. And thank you as well, Kim, for joining us. Again, congratulations on your new grandson. You're probably pretty tired right now. And thank you for supporting the CathPCI Registry team and the science team with all the work that you do. If there are any additional questions from our participants, please feel free to email us at ncdr@acc.org. Thank you for joining us.
Video Summary
In this video, moderated by Connie Anderson, the panel discusses hierarchical risk models and their use in the National Cardiovascular Data Registry (NCDR). Kim Lavin, the Science Team Liaison to the CathPCI Registry, and Dr. Klein, Clinical Professor of Medicine at the University of California, San Francisco, present on the benefits of hierarchical risk modeling and its application to PCI outcomes.

The panel explains that hierarchical models take into account both patient and hospital factors when creating risk-adjusted outcomes, allowing for more accurate benchmarking and quality improvement efforts. They contrast hierarchical models with traditional risk models, which assume all information about a hospital's performance is found only within that hospital. Hierarchical models, on the other hand, incorporate data from similar hospitals and the entire patient population to predict a hospital's performance.

Dr. Klein illustrates the concept of hierarchical models using examples from baseball and out-of-hospital cardiac arrest patients. He emphasizes that hierarchical models provide a more accurate assessment of a hospital's performance by accounting for patient characteristics and hospital effects. He also discusses the challenges and benefits of hierarchical modeling, including shrinkage and regression to the mean.

In summary, hierarchical models offer benefits over traditional risk prediction models, providing more accurate appraisals of program quality and driving quality improvement efforts. The NCDR is moving towards hierarchical modeling for all executive-level metrics, while still providing non-hierarchical risk-adjusted models in detail lines for participants to evaluate their performance.
Keywords
hierarchical risk models
NCDR
PCI outcomes
patient and hospital factors
benchmarking
hierarchical modeling
program quality