Conversation with the Experts: Dashboard Mania
Video Transcription
Hello, I'm David Bonner, the team leader of clinical operations at NCDR. I'd like to thank you for joining us today and welcome you back to session two of our pre-conference for the ACC's 2021 Virtual Quality Summit. This year, we're pleased to offer a four-part series of conversations with our Clinical Quality Advisor team. In each of these sessions, we're highlighting important information aimed at helping lead you on a successful journey in transforming cardiovascular care and improving heart health. This second session is focused on our eReports dashboards, where we plan to discuss the why: why it's important for you to use the dashboards, why it's important for you to be fully engaged in all things eReports dashboard, and why it's essential for you to take the next steps once you have metric results to move your facility forward in its efforts to improve patient outcomes.

Before we get started, let's begin with a quick introduction of our expert panel of Clinical Quality Advisors. First up is John Jarrell. Hello, everyone. Welcome back. And Denise Pond. Hi, everyone. I miss chatting with you in the halls. Melissa Nita. Aloha, everyone. Karen Colbert. Hi, everybody. Shelley Conine. Hi. I wish we were in Dallas, Texas together. Fernando Garcia-Barbone. Hello, again. Kristin Young. That was the quickest room change I've ever done. Hi, everyone. Vietra Cetapudi. Hi. Hello, everyone. And Yan Huan. Welcome, everyone.

Again, from all of us at NCDR, we'd like to extend a warm welcome to each of you and especially thank you for your hard work and dedication in always striving for excellence in your quality efforts with the ACC.

Now, let's get started with our conversation about our eReports dashboards. So, John, let's not assume everyone knows what an NCDR eReports dashboard is. Can you start us off by explaining what the dashboard is and whether it's available in all registries in our suite of data registries?

Yeah, sure, David. Thanks. I'd be happy to. I think the best way to put it is that the dashboard is the culmination of all the hard work and effort that you and your facility have put into abstraction and data submission. It's the final product, where a great deal of the value of participating in NCDR is ultimately realized. We do have eReports dashboards for all registries. On the front end, you and your facility are submitting the data, using all the different resources that we provide and coding according to the coding instructions, target values, and supporting definitions. On the back end, the eReports dashboard is where data from all sites participating in the registry is aggregated and then displayed in the form of executive summary metrics, as well as detail lines, volume summaries, et cetera. It's this data that you're investigating and evaluating that ultimately leads to recognizing gaps in performance and, hopefully, identifying some quality improvement opportunities.
If you're not using the dashboard, you're not taking full advantage of being a participant in the registry. It really is where the rubber meets the road.

Awesome. Thank you. Now, Kristin, if I'm looking at the dashboard through the eyes of a participant, I might ask a couple of questions. The first thing I'd ask is, how does the dashboard help me identify opportunities for improvement in my hospital or facility?

Yeah. Whenever you log into your eReports dashboard, you're going to notice that right next to each metric there is a bar graph, and that bar graph compares your hospital's performance against what we call the benchmark, and you're going to hear that word over and over again. With the benchmark, we say strive for green, because green data has passed the logic checks, is the most complete data, and is what is used to make that comparison of how you're doing among U.S. registry hospitals. The benchmark is divided into the 10th, 25th, 50th, 75th, and 90th percentiles. Whenever you're teetering around the 50th percentile, that's where you need to ask, do I need to improve here? You might be doing okay, but you want to be better. We all know that our cardiology fellows and physicians are super competitive, so you'd love to be at 100 percent, but that's just not always realistic, because things come up. So it's a good opportunity to see where you fall and, when you're comparing yourself against the 50th percentile, where you could possibly improve.

Given that information, then, for participants who are unsure what to do with their results, what would be a first step they should take when they see their dashboard results?

I think whenever you log into the registry and you're looking at your metric results, you should have your Executive Summary Metrics and Measures Companion Guide in hand, because in order to really appreciate your data, you have to understand what those metrics are reporting. Some of the metrics report positives, but some of them report not-so-pleasant results, where you don't want to have a number there. So start by using your resources whenever you're looking at your dashboard, to see whether something really is an area for improvement based on what the metric is reporting. And talk to people in your facility; don't just look at the information and keep it to yourself. Make sure you're taking pathways to develop multidisciplinary teams, because knowing what to do with that data, and actually maneuvering through and understanding it, is a big point as well.

Now, we still have our outcomes reports, where we do a quarterly benchmark aggregation and then publish reports. In the past, that was literally the only time you would get to look at your data results. Now that we have the eReports dashboards, we have what we call close to real-time data. Shelley, can you expand upon that a little bit?

We have a saying here at NCDR: submit early and submit often. I'm sure you've heard us say this before. When data is submitted through the data quality report, commonly referred to as the DQR, it goes through a process of complex and sensitive algorithms, and the result is commonly referred to as refreshed data.
The refreshed data is then available to review on the eReports dashboard. This provides near real-time data aggregation. Every Monday morning, the eReports dashboard provides an opportunity to see the narration of the story being told at your facility. Additionally, it provides the ability to assess for keystroke errors and the like. Quarterly, the data is aggregated and benchmarked against the data from the participating facilities within the United States; the results equip facilities to determine the need for a process improvement initiative, and they also provide tangible evidence of the excellent care provided by your facility.

So that's helpful. So the data does not appear in the dashboard the minute you enter it and send it through the DQR process. It goes through a refresh once a week, so you get that close to real-time data on a weekly basis, and then you can query it in the dashboard. Very good. Thank you.

Now, in that data, we often hear the terminology numerator and denominator, and we're sometimes not really sure what's included and excluded. There's a lot of documentation available for that, but Karen, I'd like to ask if you could explain to the audience the difference, in regards to NCDR, between the numerator and the denominator.

Absolutely, David. Great question. We're going to look at the numerator and denominator separately, and then we're going to look at an example of what would be a denominator number and what would be a numerator. The denominator is your count of patients or procedures who remain after any denominator exclusions are applied. This is going to be specific to the metric: the procedure or the patient population involved in that metric. Then you have the numerator. The numerator is the count of patients or procedures who meet the process or the outcome expected for that particular measure or metric. So if you're looking at a denominator, think of it as, for a given quarter, the total number of TAVR, or transcatheter aortic valve replacement, patients that you had. Then you want to know how many strokes occurred in that quarter. You would look at all your data and see how many strokes occurred; that's your numerator. So your TAVR patients are your denominator, and your stroke patients are your numerator. Thank you. That's super helpful.
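For readers who want to see that arithmetic spelled out, here is a minimal sketch in Python; the counts are hypothetical, purely for illustration.

    # Hypothetical counts for one quarter; not real registry data.
    tavr_patients = 120  # denominator: TAVR patients remaining after exclusions
    strokes = 3          # numerator: TAVR patients who had a stroke that quarter

    stroke_rate = strokes / tavr_patients * 100
    print(f"Observed stroke rate: {stroke_rate:.1f}%")  # -> Observed stroke rate: 2.5%

For an outcome metric like stroke, a lower rate is better; for a process metric, a higher rate of patients meeting the numerator is better.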
Also in the dashboard, we have patient detail lines, where you can sift through all the different information that was coded for one particular patient. John, can you talk a little bit about patient detail lines and why they are important for participants?

Yeah, sure. Patient detail is available for all metrics that are reported on the eReports dashboard. Once you've identified a metric, the patient details offer you the ability to dig deeper into that metric and investigate each patient who was eligible for it: who met the denominator, and who did or did not meet the numerator. There are columns of data provided within the patient detail that illustrate all the pertinent and relevant variables, paint the picture, and provide an explanation of why that patient either did or did not meet the numerator. You can use this information to decide whether data was abstracted incorrectly and something needs to be changed or modified to correctly represent what happened during that patient's hospitalization course, or whether perhaps there is a real opportunity for quality improvement based on what you're seeing with a certain patient population in that metric. Thank you so much.

Now, we hear the term raw data; from what I understand, raw data is data that has not gone through our DQR process. So if it's in the dashboard, that's technically not raw data, right? What is the difference between raw data and data that's in our dashboard? And is there a way for us to actually find raw data?

Right. The data in our dashboard has been submitted through the DQR process. If you use the ACC data collection tool, it's already gone through a quality check: it's gone through the DQR process and been checked for logic and completeness. If you're using a third-party vendor tool, it may have also gone through a quality check before going through the DQR. So this is, quote unquote, clean data that has gone through a rigorous process before it's displayed on the dashboard. Raw data, on the other hand, is the data you're submitting to NCDR that hasn't yet gone through that process. You can perform a data extract and then filter and sort that data as you desire. If you're using a third-party vendor tool, there's usually a different mechanism for doing that, but it mostly involves extracting the data into an Excel spreadsheet and then manipulating it from there.

So, making a clear distinction between the data extracts and the eReports dashboards, Denise, is there a reason why participants would need a data extract and that raw data?

Yes. They have the ability to create their own specific reports based on the data elements that are available when you perform a data extract. Not every data element is used in the metrics. So there may be something that you want to identify for each one of your operators, to provide each operator a report card based on that information. And it may be something your administration has asked you to do. Thank you.
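As a purely hypothetical illustration of that kind of extract-based operator report card, here is one way it might be built in Python with pandas; the file name and column names are invented for the example and are not the actual NCDR extract layout.

    import pandas as pd

    # Hypothetical extract exported to CSV; columns are illustrative only.
    extract = pd.read_csv("registry_extract_q1.csv")

    # Per-operator report card: case volume and observed complication rate.
    report_card = (
        extract.groupby("operator_name")
               .agg(cases=("patient_id", "count"),
                    complication_rate=("complication", "mean"))
    )
    print(report_card)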
But when we're talking about the actual dashboard data, the clean data, as John said: whenever we do that weekly refresh and we're using the query functionality, there are certain quarters that are already aggregated and for which the benchmark has been provided. Is that only data that has had a green status, a green status in the DQR report?

No, there are times when you can have yellow data; a yellow quarter in your metric gives you the opportunity to evaluate that data as well. It's just not aggregated into the benchmark, or the 50th percentile. And there are some metrics where, if you have a yellow submission, you will get no data at all. An example of that would be your risk-adjusted metrics: for those, you have to have four green quarters of data that have been evaluated through the DQR process. So data comes through the DQR, it can be yellow or green, and depending on the metric, it may or may not be included on the dashboard.

Thank you, Denise. I'm sorry to interrupt you. I just wanted to say that was a great clarification, and I really appreciate it. So, while we're with you, Denise: the bread and butter of quality improvement opportunities lies in our performance measures and metrics. Can you help us all understand what an outcome metric is? And while you're at it, can you go one step further and describe the difference between a performance measure and a quality metric?

I'd be happy to. Let's start with what a metric is. A metric is a means of comparing and tracking performance. An example is an outcome metric, which evaluates patient outcomes within your facility and within that specific hospitalization, such as death or an adverse event. A quality metric supports self-assessment and quality improvement at the provider, hospital, and possibly healthcare-system-wide level, but there's less evidence behind it. Once a quality metric has been validated, it can become a measure used for performance. Performance measures, meanwhile, are measures that are tried and true. They've been evaluated through a rigorous process, and they're suitable for public reporting, external comparisons, and possibly pay-for-performance programs, in addition to providing quality improvement opportunities for your facility. They're developed using ACC methodology, possibly in collaboration with other organizations.

Now, again, if we're a participant and we see all the metrics, we use the dashboard information, we see where we stand, and we understand what you just talked about, what is a first plan of action someone should take with those metrics?

When you see your metrics, you should be evaluating the data that went into each metric itself. You should use the NCDR companion guide. It's labeled differently for different registries, and some of them are executive summary and metric guides, but all the registries have one. It allows you to see who should be in the numerator, who is included in the denominator, and the criteria needed to meet the numerator, or, as Kristin said, if it's a negative measure, such as adverse events and death, what has to happen for that patient to be in the numerator, which is not something you want to see. In a perfect world there would be no adverse events, but we are not in a perfect world.

Right. So, when we look at these metrics and realize that maybe the results are not exactly what we expected based on what we coded into the registry, we go to the dashboard. A lot of times, as Clinical Quality Advisors, we hear from people that their patient fell out of a metric. Shelley, can you take a couple of minutes and explain what it means when we hear "my patient fell out of a metric"?

Yes, David. Actually, when I hear the quote "patient fell out of the metric," it brings a smile to my face, because I envision a patient having a syncopal episode out of the metric. But at any rate, it is a term that is sometimes used to describe when the numerator is not met by a patient, item, or element that has been deemed eligible for the measure by its criteria. So it basically means the patient did not meet the numerator.
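In code terms, a minimal sketch of that definition might look like the following, with made-up patients: a patient "falls out" when they are counted in the denominator but do not meet the numerator.

    # Made-up patients, purely to illustrate the terminology.
    patients = [
        {"id": 1, "eligible": True,  "met_numerator": True},
        {"id": 2, "eligible": True,  "met_numerator": False},  # "fell out" of the metric
        {"id": 3, "eligible": False, "met_numerator": False},  # excluded from the denominator
    ]

    fell_out = [p["id"] for p in patients if p["eligible"] and not p["met_numerator"]]
    print(fell_out)  # -> [2]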
And then, by using that information, we can scrub through the detail lines and figure out what was coded on a particular patient form that might help answer the question of why that patient did not meet the numerator. Is that normally the process we would advise people to go through?

That is correct. So, as Denise said, you would have the companion guide in hand. I like to think of the companion guides as my companion when I'm looking at the eReports dashboard. And then the patient-level drill-down report provides all the information necessary to evaluate why the patient did or did not meet the numerator. Very good.

And while we're talking about the metrics, there is within that guide a grid-like chart, isn't there, that lists inclusion criteria, exclusion criteria, and all the different information. Once again, could we repeat which document that is, to help people understand, for every metric, what is included and excluded?

Sure. That is the Executive Summary Metrics and Measures Companion Guide, or it is called the Outcomes Report Companion Guide, depending on which registry you're referring to.

And why would we have different ones for different registries? Can anybody answer that?

Oh, I can actually answer that. We currently have two different platforms for our registries. On the UMTS platform, you will find the Executive Summary Metrics and Measures Companion Guide. On the legacy platform, you will find the Outcomes Report Companion Guide.

So, some of that might be internal dialogue, the legacy and the UMTS platform. Basically, the registries on the UMTS platform, with the Executive Summary Metrics and Measures documentation, would be, help me out, guys: the EP Device Implant Registry, the Chest Pain - MI Registry, the CathPCI Registry, the LAAO Registry, and the STS/ACC TVT Registry. So those five registries would have the Executive Summary Metrics and Measures Companion Guide, and the other ones, the AFib Registry and the IMPACT Registry, would have the Outcomes Report Companion Guide, the historic outcomes report, just for clarification.

Fernando, when we're talking about benchmarks, could you talk a little bit about the significance of the 50th percentile benchmark position, or just whatever you can share about benchmarks?

Yeah, sure. The term 50th percentile might sound hard to understand, but it's actually very easy. It's the middle of the whole population included in the metric. For example, if you have 100 hospitals being measured in a metric, the 50th percentile is the performance of the hospital that's smack in the middle: the 50th hospital in the row from 1 to 100. The same rule goes for the 75th percentile and the 90th percentile. So if you're above the 90th percentile, that means you are above the hospital that fell in the 90th place of those 100 hospitals, or of the whole population being measured.

Now, if I'm a hospital out there, is there a certain place on that range that would be better for me to be? Well, you will want to be above the 50th percentile. Right, and can everybody have an expectation that they're at the 100th or 90th percentile? I doubt it. No, the expectation is that being above the 50th percentile is a good place to be. Right. Okay.
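For a concrete picture of those cut points, here is a minimal sketch in Python with 100 made-up hospital rates, using the nearest-rank percentile convention (one common method; the registry's exact calculation may differ).

    import math
    import random

    random.seed(0)
    # 100 hypothetical hospital metric rates, sorted from lowest to highest.
    results = sorted(random.uniform(60, 100) for _ in range(100))

    def percentile(sorted_values, p):
        """Value at the p-th percentile, nearest-rank method."""
        rank = max(1, math.ceil(p / 100 * len(sorted_values)))
        return sorted_values[rank - 1]

    for p in (10, 25, 50, 75, 90):
        print(f"{p}th percentile: {percentile(results, p):.1f}%")

With 100 hospitals, the 50th percentile is simply the 50th value in the sorted row, exactly as described above.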
So, let's change gears again. Melissa, do you mind taking this one? I'm going to ask you to go over the quarterly data deadline as we educate participants. You know, we always say, go for green, and submit often before the deadline. So, tying the data deadline in with the green DQR status, can you explain the significance of having green, what that means in the dashboard, and why it's so significant to our participants?

Sure. We always say submit early and submit often prior to the data deadline, because we want everyone to be successful in their endeavors. If you receive a green status prior to the data deadline, that means your data was complete and accurate, and it's going to be included in the U.S. registry aggregation and benchmark process. It also means you're going to get a complete outcomes report or executive summary measures report, and it will give you an indication of how your facility is performing for the rolling four quarters. And if you get a green, this also means that, with rolling four quarters of green submissions, you will have a complete report that you can submit for your other contractual obligations.

If you get a yellow submission in any of the rolling four quarters, that means your data wasn't complete. So when you're looking at your rolling four quarters, you're actually only seeing the three quarters in which you had a green submission. A yellow submission also means you're not going to be included in the U.S. registry aggregation and the comparison group statistics for your specific registry. And when you're looking at your performance, some metrics or measures won't be reported, such as any risk-adjusted or risk-standardized models: your adverse events, your mortality, your acute kidney injury, your bleeding. You won't get any performance results for the quarter. And if your facility is participating in any public reporting, you're going to have a blank. I know a lot of sites have mandatory reporting to their state, and the state wants to see four complete quarters of data. Thank you so much, Melissa.
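As a hypothetical sketch of that rolling-four-quarters rule in Python (quarter labels and statuses invented for illustration):

    # Invented DQR statuses for four rolling quarters.
    quarters = {"2020Q3": "green", "2020Q4": "green", "2021Q1": "yellow", "2021Q2": "green"}

    benchmarked = [q for q, status in quarters.items() if status == "green"]
    print("Included in aggregation and benchmarking:", benchmarked)

    # Risk-adjusted metrics need all four rolling quarters to be green.
    if all(status == "green" for status in quarters.values()):
        print("Risk-adjusted results reported for the rolling four quarters.")
    else:
        print("Risk-adjusted results suppressed: at least one quarter is not green.")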
Now, we're running a little short on time here, so I'm going to ask Kristin if you can summarize the importance of the eReports dashboard for our participant community.

Yep. As a wise physician once told me, you can't improve what you're not measuring. And if you're not looking at your dashboard, you're not seeing what you could be improving upon. So, go look at it.

I just wanted to add, David, that you can be involved with the development of the metrics. We do use an ACC methodology, but we do ask for your opinions when we have a public comment period and a peer review. So please look for those announcements on the homepage of each registry, so that you can have an impact on how those metrics are developed.

Thank you for that, Denise. I think we're running too close on time to have any more conversation, but I do have a couple of questions in the queue that were sent to me, real quick before we leave. What is the purpose of a volume group in the dashboard reporting?

In the dashboard reporting, you have the U.S. Registry group, which is every other site that's participating in the registry, and then you also have your volume group. What that does is break down your performance just among your peers that have a similar volume of procedures performed in that quarter. So it's one extra way of comparing yourself to like hospitals, as far as the size of the facility or the volume of procedures. Super helpful.

And then, if quarters are not published, will they still show up in the dashboard? You mean unpublished quarters, quarters that haven't been aggregated into a benchmark report yet? Will sites be able to see them? Yeah. That goes back to our conversation about the weekly aggregation of unpublished quarters: you'll still be able to see that data after you submit through the DQR. Okay. Thank you for that clarification.

So, we've reached our allotted time for this session. There's so much more we could talk about, but again, I'd like to thank everyone for attending this session, Dashboard Mania. We know there's a lot more detailed conversation to be had, as I said, so if you do have questions, please reach out to us using the Contact Us link, by calling 1-800-257-4737, or by emailing us at ncdr@acc.org. Now, we're going to continue this series. We'll take a little break, but we'll be back this afternoon. Our next session is called Conversation with the Experts: Quality Improvement in Action. We were very lucky to have tapped into a unique quality team at Iris Medical Center in Iville, Idaho, so you won't want to miss that. See you later this afternoon. Bye, everyone. Bye. Bye.
Video Summary
In this video, the team leader of clinical operations at NCDR, David Bonner, welcomes viewers to session two of a pre-conference for the ACC's 2021 Virtual Quality Summit. The session focuses on eReports dashboards and the importance of using them to improve patient outcomes. Bonner introduces the expert panel of Clinical Quality Advisors and discusses the significance of the dashboards in evaluating and improving cardiovascular care. John Jarrell explains that the dashboards are the culmination of the data submitted by facilities participating in NCDR and provide executive summary metrics and detailed information about patient outcomes. Kristin Young discusses how the dashboards help identify areas for improvement by comparing a facility's performance to benchmarks. Melissa Nita explains the importance of submitting complete and accurate data before the quarterly deadline to receive green status and ensure inclusion in benchmarking. The panel also discusses the use of companion guides to understand metrics and the availability of raw data for further analysis. The session concludes with a reminder to submit feedback during public comment periods to influence the development of metrics. The next session will feature a conversation with the quality improvement team at Iris Medical Center.
Keywords
eReports dashboards
patient outcomes
clinical quality advisors
benchmarking
companion guides
raw data
quality improvement team