Performance Improvement Initiatives Optimizing Data from Two NCDR Registries
Video Transcription
Hi, and welcome to the Performance Improvement Initiative Optimizing Data from Two NCDR Registries session. Please locate the session in the mobile app. This is where you can participate fully in the session with polling questions and Q&A. We have two speakers today. The first one is Maria Eva Andrew, who is a Continuous Quality Improvement Specialist at Houston Methodist Hospital. And Dr. Leslie Davis, who is an Associate Professor at the University of North Carolina at Chapel Hill. She is a Fellow of the American College of Cardiology. In 2021, Dr. Davis received an ACC Distinguished Associate Award, and she currently serves as the Chair of the ACC National Cardiovascular Data Registry Chest Pain MI Registry Steering Committee. Both of them have many accomplishments, which you can read in full in their biographies. All right. Welcome, everyone. Thank you for inviting me to the ACC. And I've got to tell you, I've heard a lot about the Quality Summit, having in the past served on the NCDR Oversight Board and then been very much involved with the Chest Pain MI board. And the timing of this conference always falls exactly when I'm supposed to be teaching. I'm a nursing professor, so today my students have an asynchronous class. They're very happy about that. But I am thrilled to be here. I've got to tell you, I've been to a lot of professional conferences over the decades, and I have never felt so much energy, even at 7 a.m. So I am excited to be here, and I look forward to our time together for the next hour. Today we're going to talk about performance improvement initiatives optimizing two registries. My disclosures have already been shown here: I'm the Chair of the NCDR Chest Pain MI Registry Steering Committee, and that's the only thing relevant to this talk. So the goal for our session, which is written in the description: we're going to emphasize the value of maintaining data integrity, and we're going to showcase one hospital's approach to performance improvement with illustration and insight. Our objectives are to identify the rationale behind STEMI validation across registries and data sources, recognize the value of patient drill-downs and benchmarking comparisons, discuss the value of a nonlinear project progression, which is going to be our case illustration, and the value of a team-based approach. So before I get started this morning, our president of the ACC mentioned the PEEPS. I've got to tell you, I'm an old ER nurse, from the ER before it was the emergency department; we had the emergency room. I really miss being able to defibrillate with the paddles, before they went hands-free. At that time I said, all right, let me look at something. Back then it was the NRMI, the National Registry of Myocardial Infarction, and I started doing QI. I was the data abstractor for that because of my love for defibrillation and the ability to use those paddles. You know, I used to practice as a child with the air hockey paddles. I loved it. One of my sisters became a lawyer. One of them became a psychiatrist. I guess they were helping each other after I would defibrillate them so much. But I've got to tell you, I used to collect all this data as part of the NRMI work, and now we have many quality registries. But what is a high-quality registry, and what is the data you get from it? You all know, I'm speaking to the choir here: a high-quality registry provides a mechanism for structured, automated collection of patient information.
Registry data, if it's high quality, can be used to help both set the benchmarks for comparison and compare your performance at your site, regionally, nationally, and globally, because we have global sites. We know from this morning's poster presentations that there are examples from around the world now using registry data for outcomes-based continuous performance improvement. Now, I've also been a head nurse before. We used to call it quality improvement, and then the buzzword was continuous quality improvement, and now it's performance improvement. So the terms sort of change, but it's all about getting better. Even the terms get better. High-quality registry data also helps attain high-quality care, consistent with evidence-based guideline recommendations. So later today, around 3:15 or so, there will be a talk on how you get from the clinical guidelines to what drives what you're supposed to do in clinical care, and then how we collect that in a data registry. But this is what gets you the high-quality registry data. The data can be used beyond performance improvement at your specific site. We have this massive amount of patient information. As a researcher, I want to call it data, but it's patient information that you can use to ask and answer questions about care delivery and cardiovascular outcomes. There's a long list of publications from the different registries. In fact, I believe it was about a year or so ago that we did the Chest Pain MI 15-years-in-review of the number of publications. There's so much that can come out of this information, not just directly impacting one patient at a time, or all the care you give at your site, but improving cardiovascular outcomes across the world. High-quality registry data provides data-driven insights to inform your clinical and operational decisions to optimize care. It helps meet requirements for accreditation, as you know, and in some states it helps meet the requirements for state certifications. Okay, now what about data integrity? I'm just teeing this up for Maria and what she's going to share in our example. Data integrity ensures quality, efficiency, and continuity; this morning's poster talked about efficiency and not duplicating work. So to have really good data integrity, you need accurate information. The information has to be correct, and that's what accurate means, if you look it up in Google or even ask AI, from this morning's talk. It also has not been improperly changed, either intentionally, if somebody's gaming, and I'm not suggesting that with any of this, but you can have intentionally incorrect data, or unintentionally changed. Reliable: I think about reliability as, if you were to collect that information, then collect it five minutes later, then collect it next week, the documented data doesn't change. It's repeatable over time, whether it's one person or other people; if it's reliable, no matter when you collect it and who collects it, what's documented will be the same. It's consistent in the fact that it doesn't change over time. And to maintain data integrity, it needs to be complete. I think about complete as having no missing data, which is usually an issue of information not being in the medical record, but also no stolen data. We don't think about that in our case, but it means eliminating missing data.
So that's what makes up good data. All right, there's also maintaining data integrity. To maintain the data that's in there, and this is patient information when we talk to IRBs, Institutional Review Boards, but it is data, it has to be trustworthy throughout the life cycle. So from the time data are acquired until the time you archive them, or in some cases, not this one, destroy the data, you need to maintain that data integrity. So those are the terms. How do you get good quality data, how do you maintain it, and how do we get it into registries? Well, digitalization offers data capture solutions, and you heard about some of that this morning. But back in the day, in the 90s, I did a lot of hard copy, you know, from the hard-copy medical records to the hard-copy data forms that came in triplicate, sort of like when you get a bill, three copies. The yellow copy is what I later used to do research papers, Heart & Lung papers and different things where we used our own data to write reports and manuscripts about our community hospital. All of that was hard copy. I'm not saying I made a human error, but human error can happen when you take the time off those ECGs and put it in the case report form, as we called it at the time, which I later used to ask and answer research questions. Now it's much more automated. An online data collection tool is great. And also, as you know, the NCDR offers certified third-party software vendors that can automate data capture. Also, and we didn't worry about this so much in the 80s and 90s, you need to have safety plans to help avoid privacy violations or information breaches, and a built-in audit system with a random selection of cases, and our quality registries do that. If you look up the definition of data integrity, that's part of it. So what are the consequences if you don't maintain data integrity, or don't maintain that high-quality data? Well, you can have inaccurate or incomplete data, and that poses challenges for patients, clinicians, health systems, and society at large. If we don't have data, or patient information, informing our choices clinically or for cardiovascular outcomes, that's a problem. Good quality data can lead to publications, to evidence, that can lead to guideline recommendation changes. So it all matters. And if the data are not collected and maintained at good quality for that whole life cycle, and you're basing decisions on them, whether you're a researcher or a performance improvement expert, there's a threat to conclusion validity. Now, if you were in my research classes as a master's or doctoral student, you'd go, oh, all those threats to validity. A threat to conclusion validity means we're making conclusions based on that information, and if the data are wrong, then our conclusions are wrong: either you're saying something's there and it's not, or vice versa. So you can make incorrect conclusions about the relationships. If you have duplicate data entry, and we heard about that this morning in the electronic poster, you just want to make sure. And there was also conversation, and Maria's going to talk about this, about the definitions of these metrics, the data dictionaries. Do they matter? Yes. All right. So what are methods, and I'm teeing it up for Maria, to compare data elements to optimize data quality? All right. So I talked about reliability.
If you keep collecting it with the same person, that's intra-rater reliability, or with two different people or two different systems, inter-rater reliability, and you're getting the same information, you can say that's correct information and make decisions on those conclusions. It's going to help evaluate the accuracy and the integrity of data abstractions. This is what we call IRR. IRR is an assessment of sample cases, you usually don't do them all, though in some cases with small numbers you can, to measure the degree of agreement among the reviewers or between two different sources. For other forms of comparison, you can look in the literature: there's intra-rater reliability, or triangulation of data sources, and we do that. There's also a whole body of literature on drill-down data, where you look behind the numbers to improve care, especially if you have a select number of cases. So, again, to tee it up for Maria, who's going to be talking about how they looked at the Chest Pain MI and Cath PCI registries for a subset of patients. As a reminder, what data do we collect in the Chest Pain MI Registry? There's demographic data about all the different subgroups that can be in this registry, patient-specific treatment strategies and processes of care, provider and facility characteristics, hospital-specific therapies and reperfusion strategies, and compliance with ACC/AHA guideline recommendations. Anytime we refer to what health systems and health care professionals do, that's compliance; if it's patients, that's adherence. And a reminder, everybody should know this, but data from the Cath PCI Registry include the demographics for diagnostic and PCI procedures, risk factors, cath lab visit indications, and, I just went through the most recent data forms, coronary lesion information and all the other information that's collected there. So those are the different things collected as part of that. What we're getting ready to hear about is how data from two registries were used to cross-validate STEMI cases, ST-elevation MI cases. So I'm going to pass the baton to Maria, and we'll take your questions at the end. Thank you. Thank you, Leslie. So, good morning, everyone. My name is Maria Eva Andrew. I'm a Continuous Quality Improvement Specialist at Houston Methodist Hospital. I'm really honored to be here today to present to you an intent to cross-validate STEMI cases between the Cath PCI and Chest Pain MI registries. Is it a brain teaser? I am here with our external reporting department and my team members: our manager, Vicky Chu; our Chest Pain MI abstractor, M.J. Bialovasquez; our registry associate, Becky Perez; and a few other colleagues. Some information about myself: I've been in this role about a year and a half, but I was previously an infection preventionist at the same acute care hospital for many years, where I was involved in the external reporting of hospital occurrences, and I worked on numerous performance improvement projects as well. So it's been a really interesting journey to learn all about NCDR. And on a personal note, I'm also very honored to be here today because, coincidentally, when I started this position, both of my parents experienced significant heart conditions, and both underwent procedures at Houston Methodist Hospital literally months apart. So I can attest firsthand to the importance of good quality care and timely care, and I want to thank you for all the work that you do on a daily basis as well.
So we'll start with a poll to get to know our audience better. And do I need to go back to scan? What is your role at your facility? We'd like to know more about you, if you could answer from the answer options here. And if you're like many of us and you wear numerous hats, there is an answer option where you can select more than one role. We'll have about 15 seconds per poll. Go back to the QR code? Sure. I'll pause here for a minute. All right, thank you. We have a few more polls in the presentation, so now we know how to advance the slides. So for the poll on your role at your facility, we see we have an eclectic blend, many of us wearing multiple hats. We have abstractors, project specialists, registry site managers, and that just speaks to the complexity of the work that we do on a daily basis. That's followed by a more personal question: how many of you know someone who's been affected by heart disease? So, 96%, of course. This just speaks to how prevalent this condition can be and how common it is to be impacted either directly or indirectly through those we know and love, further highlighting the important work of this conference. So, diving into the presentation, some more information about Houston Methodist Hospital, or HMH as we call it. It's a leading academic medical center in the Texas Medical Center, and the Texas Medical Center is one of the largest, if not the largest, in the world, I believe. Houston Methodist has eight community hospitals in the greater Houston area. It has been named to the U.S. News & World Report prestigious Honor Roll, the best hospitals list, for the eighth time overall and the sixth consecutive year. It's the number one hospital in Texas, also for an impressive 13th consecutive year. We're also ranked in numerous specialties, 10 specialties in 2024-25, and cardiology, heart and vascular surgery ranked number 15. We do participate in numerous registries. For NCDR purposes, the registries that we participate in are listed here: Chest Pain MI, Cath PCI, LAAO, STS/ACC TVT, and EP Device. And you can see the volume for the past rolling four quarters, last published in quarter one 2024, displayed here for Chest Pain MI, and then the procedure volume for the registries that follow. You can see a notable procedure volume there for Cath PCI. So, touching on Leslie's points about IRR, we have a poll question next. For your facility's inter-rater reliability, or IRR, do you use an internal or external auditor? And we'll give you some time to answer. Thank you for your responses. And so we do have a mix. And I forgot to add, for those of you that don't do an IRR or are unfamiliar with an IRR, that's completely fine; there was a response option for have not done an IRR at this time. So the majority actually do internal IRR, but we have external as well. That's a nice transition to the topic of this presentation, because an IRR is really how it started. Our external reporting department performs IRRs routinely on our registries, and in 2023, the Cath PCI Registry was up for rotation for an IRR. So the goal of this project presented here today was initially strictly just to validate STEMI patients across the Cath PCI and Chest Pain MI registries, and also an internal data source, which was our internal STEMI activation team review data. We did recognize that the Chest Pain MI Registry included both NSTEMI and STEMI patients. So here's a visual outlining our quality improvement STEMI story, which is very much still in process.
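Since the whole story starts from an IRR, here is a minimal sketch, in Python, of the two agreement statistics an IRR like this typically reports: percent agreement and Cohen's kappa. The abstractor values, field, and case count below are hypothetical placeholders; real IRR programs such as NCDR audits define their own sampled cases, targeted variables, and passing thresholds.

```python
# Minimal sketch: percent agreement and Cohen's kappa between two abstractors.
# The patient-type values and the 8-case sample are hypothetical.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Share of cases where both abstractors recorded the same value."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # p_e: probability both raters pick the same value by chance alone.
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical patient-type codes abstracted twice for the same 8 sampled cases.
abstractor_1 = ["STEMI", "NSTEMI", "STEMI", "STEMI", "NSTEMI", "STEMI", "NSTEMI", "STEMI"]
abstractor_2 = ["STEMI", "NSTEMI", "NSTEMI", "STEMI", "NSTEMI", "STEMI", "NSTEMI", "STEMI"]

print(f"Percent agreement: {percent_agreement(abstractor_1, abstractor_2):.0%}")
print(f"Cohen's kappa:     {cohens_kappa(abstractor_1, abstractor_2):.2f}")
```

Kappa is worth reporting alongside raw agreement because two abstractors coding mostly-STEMI samples will agree fairly often by chance alone.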
And the focus of this presentation is more about our post-IRR quality improvement approach. So we started with the inter-rater reliability that you see there on the top left, then proceeded to do these NCDR STEMI comparisons, which led us to do patient-level drill-downs to explore performance improvement opportunities, then phased into looking at our internal STEMI activation data, and where we are today, looking at benchmarking across projects and thinking about innovative ways to display data for our stakeholders. So why STEMI? STEMI was chosen as a project because it was included among the targeted variables during the IRR for Cath PCI, to accurately capture PCI indications. And then, of course, it's very relevant to patient type selection in the Chest Pain MI Registry. And here's a visual of the patient type and PCI indication definition that's available in the NCDR Chest Pain MI data dictionary. I'm assuming everyone here is very familiar with this definition, so we'll move on, but I'll just pause here for a quick review. So again, this is about our post-IRR experience. And our approach was really just to form a small work group, as I mentioned, to validate these STEMI cases across the two NCDR registries and our internal STEMI activation review data. This involved reviewing patient lists across data sources and really exploring the patient list differences. Our first phase, as we like to call it, really focused on defining patient inclusion and understanding the distinctions between the registries, particularly for me: Chest Pain MI being patients between arrival and discharge with a diagnosis of STEMI, and Cath PCI, of course, being more of a procedure-based registry, with patients undergoing percutaneous coronary intervention. During this first work group, we also reviewed patient lists for specific metrics for each registry, and a comparison of the metrics chosen is listed here. For Cath PCI, we chose PCI within 90 minutes, and the description is below. For Chest Pain MI, we chose first medical contact to device time, and the description is below. And for our internal STEMI activation data, we learned that the door-to-balloon calculation was aligned with the Chest Pain MI metric definition, so it was nice to see this aligned. So initially, this first group meeting focused on reviewing these patient lists to see if there were any differences. We reviewed the patient lists for the two metrics described, with the rationale that the Cath PCI patient list for PCI within 90 minutes for patients with STEMI should overlap with the Chest Pain MI patient list for first medical contact to device time. Our findings, however, were that for the lists that had fallouts, the comparison revealed a rolling four-quarter match of 64%, or does it say 62%? So it was a 62% match. It wasn't 100% for the patient-level drill-downs when we did that comparison. We did note that the Chest Pain MI Registry had more exclusions, but this didn't account for the differences, nor did EMS transport time. So then we started going into the second work group phase, where we reviewed the patient-level drill-down for Cath PCI within 90 minutes to look specifically at opportunities for improvement.
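As an illustration of the first-phase list comparison just described, here is a minimal Python sketch using set operations. The patient IDs are hypothetical placeholders (chosen so the match rate happens to land near the 62% figure above); in practice the two sets would come from each registry's export, reduced to a common internal patient or encounter identifier.

```python
# Minimal sketch of a cross-registry patient-list comparison, assuming each
# registry export can be reduced to a set of internal encounter IDs.
# All IDs below are hypothetical placeholders.
cath_pci_fallouts = {"P01", "P02", "P03", "P05", "P08", "P09", "P11", "P13"}
chest_pain_mi_fallouts = {"P01", "P03", "P05", "P08", "P09"}

overlap = cath_pci_fallouts & chest_pain_mi_fallouts
only_cath_pci = cath_pci_fallouts - chest_pain_mi_fallouts
only_chest_pain_mi = chest_pain_mi_fallouts - cath_pci_fallouts

# Match rate: cases present in both lists, out of all cases on either list.
match_rate = len(overlap) / len(cath_pci_fallouts | chest_pain_mi_fallouts)
print(f"Match rate: {match_rate:.0%}")
print(f"Only in Cath PCI:      {sorted(only_cath_pci)}")   # drill down on these
print(f"Only in Chest Pain MI: {sorted(only_chest_pain_mi)}")
```

The two "only in" lists are exactly the cases worth a patient-level drill-down, since each one represents either a coding discrepancy or a legitimate registry exclusion that needs to be explained.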
And we ended up reviewing, for that rolling four-quarter period, eight patients that were fallouts, and we looked at the list differences between registries and realized that one patient was coded as STEMI in one registry and NSTEMI in the other registry, and the remaining patients had delayed door-to-balloon times that were revalidated and confirmed by the abstractor. So this really spoke to the importance of doing validation across both registries. In this third work group phase, again, we're looking more at performance improvement opportunities. We looked at the specific drill-down for Cath PCI in particular, and we categorized, or came up with themes for, the opportunities for improvement, and those are shown here, with ED to cath lab transit time being the most frequent contributor to the fallouts related to PCI delay, and then you can see the subsequent themes that were developed as well. And this actually overlapped with the timing of a cardiology service line meeting, so we were able to share these themes, and it was presented in the meeting, so this was a nice coincidence as well. So now we're going to take another poll, just to make sure everyone's still with us, and ask: myocardial infarctions are more likely to happen on which day of the week? So we have a smart group, or an intuitive group, or both. Correct me if I'm mistaken, but the internet does say that Monday is the most frequent day of the week. So those of us that have standing, recurring Monday morning meetings with management, such as myself, might need to reconsider. Yeah. All right, so hoping everyone's still with us. We're coming back. We're now in this third work group phase, as I like to call it, thinking about more performance improvement opportunities. So then we start comparing our patient lists with our internal STEMI activation data, and we also start to concurrently validate that our patient list for inclusion in the Chest Pain MI Registry was complete, so that it included all necessary cases. So now we're starting to branch off from our initial goal, right, of just validating these STEMI cases. During this comparison of patient lists across these three data sources, again, we're starting to brainstorm performance improvement ideas, and we started having discussions about how we could trend data, particularly to improve time to PCI. But we realized that before doing any data analysis, we had to collectively understand the data elements that were being collected. So this led to the creation of a data dictionary for our internal STEMI activation data. We did complete this data dictionary, and the goal was to eventually analyze the data. And interestingly, this was shared with cath lab leadership, because they took an interest in analyzing this internal STEMI activation data as well, and so they had to also understand the data elements, as they could be very complex. Their aim was also to improve time to PCI. We did compare this data dictionary with Chest Pain MI data elements to assess for similarities and differences between data elements. And we'd like to point out that this data dictionary is different from the NCDR Chest Pain MI full specifications resource guide, although it was complementary. So here's a visual of part of the data dictionary. It is small font, but we're just sharing the concept.
It's actually a very long document, but we did a crosswalk, so you can see the data elements on the far left, the definition in the immediate column to the right, and then the comparison with Chest Pain MI, with the data element and the definition included as well. And then the last column to the right was the numeric value for that data element that was included in our software that's used to report Chest Pain MI data to NCDR. We just included that for fun, in case we analyze the data going forward, which I know will probably be an immediate next to-do. So some examples of brain teasers, as I like to call them, particularly as we were developing this data dictionary, are listed here, with the explanation that, for our internal data, we actually had a lot of internal goals that may not have been directly comparable to NCDR. So for example, time spent in the ED having a 30-minute internal goal, time for EMS transport having a 30-minute internal goal, door to ECG having a 10-minute goal, and this didn't include the transfer patients, of course, and response time for STEMI, particularly MD response time. Those are just some examples of internal goals that, at the time of review, didn't directly align with the metrics in NCDR. And the form to the right illustrates how this internal STEMI activation review data is collected. You can see it's very rich with various data elements to use for trending purposes. Other examples of brain teasers, as I like to call them, are displayed here, and these just involved further conversation with the data experts for this source data, to further understand if there were any differences between the NCDR data and our internal STEMI activation data. So this involved conversations about whether EMS was included in a time metric, whether the internal STEMI activation data had excluded transfer patients and whether that was in regard to a 24-hour limit, the provider listed, whether it was just the cardiologist or also included the admitting provider and the diagnostic cath operator, and whether that first ECG was regardless of STEMI. During this data dictionary exercise, again, we're starting to brainstorm ideas, and we realized, as I previously shared, that the door-to-balloon calculation used in our internal activation data was aligned with the Chest Pain MI Registry metric. So we decided to do a comparison of these two data sources. The intent of this comparison was to compare our internal STEMI activation data, with its goal of PCI within 90 minutes, to registry benchmarks for first medical contact to device time. So before we give away the answer for our benchmark comparisons, we'll take another poll. For the Chest Pain MI Registry, what do you think is the 50th percentile benchmark for median time, first medical contact to device, no exceptions? So again, we have a very smart and intuitive group. For the 50th percentile, it is 76 minutes, as the majority indicated. And for those institutions, like Methodist, striving for excellence, what do you think is the 90th percentile benchmark for median time, first medical contact to device, no exceptions? Another remarkably smart group and response. It is indeed 64 minutes. So again, we tried to answer the following question: although internally we were meeting the less-than-90-minutes, or 120 minutes for transfers, best practice, how were we performing against that NCDR 50th percentile benchmark?
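Before the answer: here is a minimal sketch of how such a benchmark check can be scripted, assuming per-quarter first-medical-contact-to-device times in minutes are available. The per-quarter times are made up for illustration, but the 76- and 64-minute thresholds are the NCDR 50th- and 90th-percentile benchmark values just quoted.

```python
# Minimal sketch: quarterly median FMC-to-device time vs. percentile benchmarks.
# Times below are hypothetical; 76/64 minutes are the benchmarks cited above.
from statistics import median

FMC_TO_DEVICE = {  # hypothetical per-quarter times, in minutes
    "2023 Q1": [103, 88, 121, 95, 74],
    "2023 Q2": [84, 91, 77, 69, 102, 80],
    "2023 Q3": [70, 73, 66, 81, 75],
}
P50, P90 = 76, 64  # NCDR benchmark medians in minutes (lower is better)

for quarter, times in FMC_TO_DEVICE.items():
    m = median(times)
    if m <= P90:
        tier = "at or better than the 90th percentile"
    elif m <= P50:
        tier = "at or better than the 50th percentile"
    else:
        tier = "below the 50th percentile"
    print(f"{quarter}: median {m:.0f} min -> {tier}")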
And we found that if our goal was to be at that 50th percentile NCDR benchmark, we needed to be at the 76 minutes, as we shared. And we realized that currently we're performing between the 25th and 50th percentiles. So here's a visual of the graph that's in development, I would say, for this benchmark comparison. I'd like to emphasize that it's very much still in progress, but we're just sharing it to share the exercise, to share the concept, and to share some lessons learned as well. So again, this concept started with a comparison of our NCDR Chest Pain MI data, which is the gold bars, and our internal STEMI activation data, which is the blue bars, and we looked at best practice guidelines. In this exercise, however, we realized that the patient lists were not identical, even though we were still targeting that best practice guideline. This is evident from the variation in the N, or volume, between the graphs per quarter. So for example, for quarter two, you can see 16 patients in the blue STEMI activation data versus the 13 in the gold. Initially we thought this was a problem. We were like, why do we have different patient list volumes? But we realized later that this was okay, and it could be explained by a variety of reasons, which we'll get to in the next slides. Going back to the interpretation of the graphs: for the gold NCDR data, you can see with the gold bars, this is displaying data for the metric median time, first medical contact to device, no exceptions. And again, we needed to be at the 76 minutes to be at that 50th percentile, right? That is represented by the green dashed line, which shows you that 50th percentile of 76 minutes, and then you have the best practice guideline of 90 minutes with the blue dashed line above. So you can see that for quarter one, for example, in this 2023 data, we were at 103, sorry, I can't see very well, 109 or 103 minutes for first medical contact to device, which was a longer duration than those 76 minutes, the 50th percentile, and even the 90 minutes. And you can see some fluctuations across the quarters, with really only quarter three below the 76 minutes, so essentially doing better than that 50th percentile benchmark. We also started having conversations about whether we should have that 90th percentile illustrated in the graph, which would then lower the threshold, and we would have it at 64 minutes, underneath the 76 minutes. And then we also started asking ourselves, for the NCDR patients that had an internal STEMI activation, did they have better time metrics? And for 2023, the answer was a preliminary yes: their median time to PCI, or first medical contact to device, was 80 minutes, and for those that did not have that internal STEMI activation, their median time was 109 minutes. So as you can tell from this presentation, we started with one goal, and that was strictly just to do validation across data sources. But then we progressed to validating across other data sources, such as our internal data, while brainstorming performance improvement opportunities; we developed data dictionaries, and we started thinking about benchmark comparison graphs. And so, for those of you familiar with spaghetti diagrams, this is a visual of how the experience felt along the way. It wasn't a linear project path, and I can see some people smiling in the front row from my team. However, we still had meaningful findings along the way.
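For anyone who wants to reproduce the style of benchmark comparison graph described above, here is a minimal matplotlib sketch: gold bars for the NCDR Chest Pain MI quarterly medians, blue bars for the internal STEMI activation medians, and dashed lines at the 90-minute best-practice guideline and the 76-minute 50th-percentile benchmark. All bar values are illustrative placeholders, not Houston Methodist's data.

```python
# Minimal sketch of the gold-vs-blue benchmark comparison chart.
# Bar values are hypothetical; only the 90/76-minute reference lines
# correspond to figures cited in the talk.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
ncdr_median = [103, 95, 72, 88]      # hypothetical median FMC-to-device, minutes
internal_median = [97, 90, 75, 85]   # hypothetical internal door-to-balloon, minutes

x = range(len(quarters))
width = 0.38
fig, ax = plt.subplots()
ax.bar([i - width / 2 for i in x], ncdr_median, width,
       color="goldenrod", label="NCDR Chest Pain MI")
ax.bar([i + width / 2 for i in x], internal_median, width,
       color="steelblue", label="Internal STEMI activation")
ax.axhline(90, linestyle="--", color="blue", label="Best practice: 90 min")
ax.axhline(76, linestyle="--", color="green", label="NCDR 50th percentile: 76 min")
ax.set_xticks(list(x))
ax.set_xticklabels(quarters)
ax.set_ylabel("Median time to device (minutes)")
ax.legend()
plt.show()
```

Annotating the per-quarter N on each bar (e.g., with `ax.bar_label`) would also surface the patient-list volume differences the speakers discuss next.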
So one of the lessons learned in this benchmarking exercise was that our patient list volumes differed, as I mentioned. And this was explained by a variety of reasons, which you can see shown here, with the NCDR patient list reasons on the left and the internal STEMI activation data reasons on the right. Simply put, the internal STEMI activation data was based on the month of the patient's STEMI occurrence, versus the discharge date in the NCDR patient list. And our internal STEMI activation data was based on pages received, versus a STEMI diagnosis at discharge for the NCDR list. And the NCDR list also had various layers of exclusion, or excluded patients, as well, such as transfers. So in the end, quite a bit of time was spent investigating these patient list differences. Kudos to our Chest Pain MI abstractor and our manager for working through all those intricacies. And we really tried to do that to ensure completeness and accuracy. So we started with reviewing our patient list for inclusion in Chest Pain MI: was it thorough and complete? And if not, there needed to be an investigation as to why, and an escalation to resolve the coding discrepancy. For our internal STEMI activation data, we had to apply filters to only include those patients that had a PCI. And then we did something that you could call a bidirectional patient comparison: essentially looking at whether all internal STEMI activation patients were included in the NCDR registry or excluded appropriately, and then vice versa, whether the NCDR patients were included in our internal STEMI activation data and had been activated with a page. So, our project summary, to outline what we've gone over today: we started with the review of two specific metrics for two registries. Then we did the drill-down on a metric for Cath PCI, PCI within 90 minutes, to explore performance improvement opportunities and list differences. We compared that with our internal data for STEMI activation and concurrently started doing some validation that our patient list for inclusion in the registry was accurate. We then started brainstorming performance improvement projects, which led to the development of this data dictionary and also the development of this benchmark comparison graph between data sources. Where we are today is asking our leaders and ourselves whether we should update targets for these time metrics and essentially align guideline recommendations with these NCDR benchmark performance metrics, particularly for time metrics such as first medical contact to device and STEMI to device. In other words, internally reframing whether we need to target 90 minutes, or 76, or 64. Our next steps are to continue to do these comparisons and compare our internal STEMI activation data with our NCDR registries, investigate if there are list differences or patients missing, escalate if needed if there are coding variations, and then, moving forward, we've considered doing benchmark comparisons between the Chest Pain MI and Cath PCI registries as well. And lastly, another lesson learned is the value of a team-based approach. We each have different skill sets. Each person was a data expert in their own data, and so we really had a defined role for each person in our small work group. We used the competencies of everyone; again, we come from diverse backgrounds. We did allow for enough time to review the files before meetings, to allow for productive meetings. We included our manager for cross-collaboration.
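The "bidirectional patient comparison" described above also reduces to two set differences. Here is a minimal sketch under the assumption that three ID sets can be derived from the exports: the NCDR Chest Pain MI STEMI list, the raw activation pages, and the pages filtered to PCI cases. All IDs are hypothetical.

```python
# Minimal sketch of the bidirectional patient comparison; IDs are hypothetical.
ncdr_chest_pain_mi = {"P01", "P02", "P04", "P06"}        # STEMI dx at discharge
stemi_activations = {"P01", "P02", "P03", "P06", "P07"}  # pages received
activations_with_pci = {"P01", "P02", "P06", "P07"}      # after the PCI filter

# Direction 1: every paged-and-PCI'd patient should appear in the registry,
# or carry a documented exclusion (e.g., transfer beyond the time limit).
needs_inclusion_review = activations_with_pci - ncdr_chest_pain_mi

# Direction 2: every registry STEMI should have had an activation page;
# otherwise, investigate why no page went out.
needs_activation_review = ncdr_chest_pain_mi - stemi_activations

print("Check registry inclusion/exclusion for:", sorted(needs_inclusion_review))
print("Check missing STEMI activation for:", sorted(needs_activation_review))
```

Either output list going to a manual chart review (and, where needed, a coding query) mirrors the escalation process the speakers describe.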
We were also forced to adapt to external circumstances while we were actively working on this. Hurricane Beryl came through Houston, and for those of you familiar with the situation, we had a power outage for many days, and for those of us in the South, we know that can be really intolerable in the summer. We were given space to allow for innovation and to think about how to approach this project, and for anyone wanting to replicate these efforts, I would advise the inclusion of a data analyst, to be able to pick up on all the nuances in the data and the differences. And this concludes the presentation. I want to thank you so much for your time today, and my team and I are available for any questions as needed. So now we would like to open this up to questions. So we've talked a lot about a few things. I'll just make a few remarks. I really liked that, at the beginning, and this was very subtle, but I caught it, you have a rotation, it would seem, where the different registries come up. It wasn't that you went for your favorite, like me as an ER nurse, okay, let's go for the bleeding traumas or the STEMIs, but you found a way, you have a rotation list where you evaluate the quality and the performance. Would you comment on that? You had mentioned they were next up. Sure. If you can all hear me through the mic, yep. So maybe perhaps from my background in infection prevention and control, we followed something called the PDSA cycle, Plan, Do, Study, Act. So it's very common to start with one approach, and then as you gather data, you can re-evaluate and redefine your scope as needed, and it can be a very fluid process, and we were given the opportunity to do that and focus on whatever we found needed to be addressed, and share it with stakeholders and follow their lead. So you said Chest Pain MI was up next, or STEMI cases were up next. So my question was, is there a calendar or a list for how you choose priority, and how long does it take to get through all of this? For the IRR specifically? Yes. Yeah, so we do have our manager here who might be able to answer more concisely, but we do have an internal team that is capable of doing the IRR within registries, but in the instances when staffing is lacking or we need an external auditor, we do contract out; specifically, we did with registry partners. So we'd love to take some questions from the audience. Are we able to do that? Yes, we are. We have several questions. The question with the most upvotes is: what patients does HMH abstract into the Chest Pain MI registry, including low risk? And into the Cath PCI, i.e., interventional only, or interventional and diagnostic? So I can try to answer, and I'll have my team answer as needed; correct me if I'm mistaken, our registry partners as well. I can pass the mic, or, I guess, provide feedback: for Cath PCI, it would be both diagnostic and procedure, and for Chest Pain MI, STEMI and NSTEMI primarily, but if you'd like to add anything further, please do. We do the non-STEMI and STEMI, no low risk, no unstable angina, and then PCI only; we do diagnostic if they have PCI, but it's PCI only. Yeah, so it's PCI only, but they do diagnostic if accompanied by PCI. And you said for the Chest Pain MI registry, you are currently not doing the low risk? Correct, just non-STEMI and STEMI. Non-STEMI and STEMI. And we'll have that microphone in case others in the audience have questions. Go ahead, Shelly.
Okay, so the next question is: do your abstractors abstract the STEMI case for both registries, or do your abstractors focus on just one registry? I think we should invite your abstractor up for this, because these are boots on the ground. How about that? I want to hear the abstractor, come on up. And maybe even a show of hands; I can't see with the lights whether they're coming up, but with our abstractors, we need a polling question, but we can't get one right on the ground. Raise your hand if you're an abstractor for a registry, first of all. Okay, so you all can see each other. Keep your hand up if you're an abstractor for more than one registry. Keep your hand up if you're an abstractor for Chest Pain MI and Cath PCI. Okay, first of all, for anybody that had their hands up, I think everyone needs to applaud anyone who's boots on the ground, because without you, this quality data registry would not be possible. I mean, we need everyone in this audience here. But without the abstraction, without the great data... So, Mary Jo, the question was: are you the abstractor? Yes. Yes. Boots on the ground. For Chest Pain MI. Okay. So do you do Chest Pain MI and Cath PCI? No, I don't. I only handle the Chest Pain MI registry, and the Cath PCI is with Carta. We had internal abstraction for the Cath PCI registry before. Who's collecting that data today when everyone's here? So it's not real time. Okay. What's our next question, Shelly? I want you to stay in case there's a question. Okay. What was the goal for the physician's response time for the STEMI? So probably the goal for the cath lab response, because I've been in many of those cases where the physician's there and you're waiting on the whole team. Correct. So the cath lab response; was the question really about the cath lab response? Well, it was for the physician. Physician. Okay. So the specific question is, and maybe you all know that internally, what was the goal? These are very specific, but very good questions. What's the goal for the physician to arrive in the cath lab, his or her response time? It should be 30 minutes. 30 minutes? 30. Okay. Is there a different goal for the whole cath lab team to arrive? No. It's the same. All right. You need the team, right? You do need the team. Okay. Next. How many abstractors are you auditing, and how do you handle IRRs that may not meet your standards for accuracy? That's a tough one. So how many abstractors are being audited? Audit sounds like, ooh, Jigsaw. Are being evaluated. Congratulations, this is our manager, on such great work. And by the way, while we're getting to that question, I'd challenge that one earlier question about how many have internal or external inter-rater reliability: part of the NCDR is a percentage of cases that are audited, so you do have an inter-rater reliability check. Well, thank you for that question. My name is Vicky Chu. I'm the manager for external reporting for Houston Methodist, and I know your first question before was about rotation. I do have, as part of my goals, to do an IRR for at least three registries per year. Okay. That's a good question. That was my question. That was one question. Three registries per year. Great. And the next question was, how do we remediate?
We do have, with our outside contractor, and they're very specific, and it's really nice because they do have goals and criteria to meet: you need to meet at least 95%, and if you don't meet 95%, there are different steps. You know, if you don't meet 95, then you go to a second round, what are your learning points, and then you have to at least meet the goal in that second round. Of course, for the third round, we don't want to talk about it. But we just want to get it right. So it's all in the spirit of continuous performance improvement. Yes. The main goal of the IRR, really, is to make sure that we are interpreting it correctly, and we're giving them a chance to continuously learn, and then giving some feedback. Because really, the goal of the IRR is making sure that we're giving valid, reliable data. Okay. And hence, the manager has a lot of those ribbons, by the way. So what's the next question, Shelly? Could you please share your data collection form document? If we reach out to you via email, I would love the layout and would love to use it at our facility. And I can tell you that we do have a participant resource sharing page, so if you submit it to the NCDR, then everybody can actually use it. But that would be something that you would need to decide to do. But are you willing to share that? That was the question. That's awesome. Thank you. And Shelly, I'm going to put on my academician hat, slash also editor, but I won't disclose which journal. But I say this is the type of work that not only needs to be presented in this format, but also needs to be written up if you don't already have a manuscript, because these are the very things that need to be disseminated beyond this group. So go ahead. So this next question, it's ticked up high, and I'm going to ask it for that reason. So: discharge diagnosis, notoriously inaccurate. How can that be corrected? Whoa. Let's see. Whoops. The time is over. We are done today? No. You know, this is tough. I mean, I deal with this on a firsthand basis just in the clinical research I do. I did explain that I do ACS, acute coronary syndrome, research, where I do behavioral interventions. And that discharge diagnosis sometimes doesn't match what I know is in the medical record, and so whether they're eligible for a study they could potentially benefit from. So it's tough, for many reasons, for what we can offer clinically afterwards. And still, with these quality data, every patient matters. That's on our website for NCDR; like, that's mine. But it's every patient matters. And what you're doing is for that every patient, someone's mother, someone's father. So that discharge diagnosis, I don't know how to take that curveball. I knew when she was building up to it, it was going to be hard. But I know it's a reality we live in, and hopefully the AI will help. But can you all comment? I mean, let me delegate. I'm not sure if it's going to answer the question, or is the answer to the question, but at Houston Methodist, we do have our resource, and I know we didn't mention it: we do have a coding team. We call it the coding query team. When our abstractors have questions about how something is coded, or whether it's coded correctly, we do have access to them, and they review the charts.
And that's one of the processes we used with Cath PCI and the STEMI review as well: we reached out to our coding team, because they're the experts on the coding, and said, we think this is not that. And then they would come back to us and say, well, these are the guidelines, and explain why it's coded that way when we think it should be coded this way. Or they would say, well, we reversed it, and thank you for your review. And I also think it goes beyond this. We have these discussions at the steering committee level. We have these discussions at the guideline committee level. Type 2 MIs are probably going to create an MI in me. But type 2 MIs, and I've got to tell you, because I'm looking at the clinical trials I do, you know, it says I'm going to exclude type 2 MI. I've quit putting that on grant applications, because it's too hard to explain. But even in the med records, I say, if you see something, say something. Because a lot of times, somebody might be in a situation where they're saying type 2 MI, but I'm like, okay, our champions for our registries look at that case, our head of cardiology will look at the case, I'll look at that case forward and backwards, and I know they could benefit from treatment, and I know in my heart and soul that they're not a type 2 MI, based on the evidence, based on this and that. So I say that case needs to be flagged, because that coding is not right. And it could have been somebody's note in this. And I use it as an opportunity for improvement, an opportunity for journal club, an opportunity for grand rounds. But I'll just tell you, that's my soapbox on type 2. This was on STEMI; I'd say non-STEMI should be next. That's tough. But non-STEMI. Yeah. And I wanted to add, we do have very good relationships with all of our physicians, and we do send it out to them as well, and CC them on our query to the coding team, so that they are also looking at what we're querying and why we're asking the question. Okay. The next question is: we currently utilize a third-party abstraction company for the Cath PCI registry; however, we abstract the Chest Pain MI registry internally. How could we incorporate an IRR? Is there any feedback you can give for that question? We're actually in the same scenario for that question. We actually have MJ as our internal abstractor, and then recently we had our Cath PCI abstractors retire, so we did outsource our Cath PCI. In terms of IRR, we're still in the early stages, so I'm still thinking of how we can do the validations. And I know MJ, as you know, Cath PCI and STEMI are closely related; she is also learning the Cath PCI, because in the future, with all the AI talk, there might be possibilities of using AI for Cath PCI. Then that would give our abstractor more time to do the validation rather than the manual abstraction. Okay. So we are at time. We have one more minute left, and I'm not sure any of these questions can actually be answered within that minute. But maybe this one: do you have automated data entry, and if so, what, where it comes directly from the electronic health record into the registry? The only automated data we have right now is really the demographics, which interface into our vendor software. But we still do validate those, just to make sure that we're mapping it correctly. Okay. All right. I'm going to ask one last question. I'm going to take the prerogative of having a mic here. And I asked the same question this morning at that wonderful poster.
What surprised you the most when you did this brain teaser? What surprised you the most? That's a good question. Oh, I'm hurting her. Well, I think there were, as I shared, some meaningful findings along the way. And while we were doing this work, it coincided with meetings, and it coincided with asks for data requests. And so actually, it's very rewarding that we worked on something, and now it's a project at the hospital, because along the way, the data was shared. So for example, the hospital has formed a quarterly STEMI meeting. It's physician-led, and the intent is to improve time to PCI and explore PCI delays. And it's very active with data requests. So it can be very rewarding. I think change can take time, right? And if data is shared along the way in a project, at the right time, to the right place, to the right audience, it may take off as a project and improve patient outcomes. So it was very rewarding for me. And personally, that was the most surprising thing, how it all coincided. So thank you for having us. This is wonderful. Thank you.
Video Summary
The session, "Performance Improvement Initiative Optimizing Data from Two NCDR Registries," focused on leveraging data from the NCDR Chest Pain MI and Cath PCI registries for performance improvements. Hosted by Maria Ava Andrew from Houston Methodist Hospital and Dr. Leslie Davis of the University of North Carolina, the session emphasized data integrity and patient drill-down analyses.<br /><br />During the session, it was discussed that systematic validation of STEMI cases across registries is vital. An initial goal was to perform internal audits (IRR) for the NCDR registries, comparing metrics like PCI within 90 minutes and first medical contact to device time. The results showed significant discrepancies with only a 62% match rate. The audit revealed issues like inconsistent data entries, especially concerning STEMI diagnosis variations between registries.<br /><br />The second phase involved deeper patient-level analysis to identify performance improvement opportunities, focusing on delays such as ED to cath lab transit time. Furthermore, an internal STEMI activation team's data was compared to registry data, revealing the need for enhanced benchmarks in patient care times.<br /><br />Maria Eva Andrew outlined their methodical approach: rotating assessments across registries, team collaboration, and iterative data dictionary development. They discovered a need for benchmarking beyond internal guidelines, aligning more closely with NCDR standards.<br /><br />The session concluded with a Q&A, discussing the importance of accurate discharge data, the need for automated data entry systems, and how the findings led to broader performance initiatives at Houston Methodist Hospital. This comprehensive analysis exemplifies the critical role of high-quality data in improving cardiovascular care outcomes.
Keywords
Performance Improvement
NCDR Registries
Data Integrity
STEMI Cases
Internal Audits
Patient Drill-down
Benchmarking
Data Entry Automation
Cardiovascular Care
Houston Methodist Hospital