Data Abstraction Quality – Lessons from Your Peers
Video Transcription
the Heart Hospital Plano. I'm gonna be your moderator today for this exciting session that we have, Data Abstraction Quality: Lessons from Your Peers. We've got two really great presenters here for you today. We have Jennifer Waters from Baylor Scott & White The Heart Hospital, and we have Vicki Van Meter from OhioHealth in Columbus, Ohio. Jennifer's gonna open it up for us. Once Vicki has also finished, we will have time for questions, so please make sure you're submitting your questions through the app. Jennifer? Hello, I'm so excited to be here today and share with you the story of our department and how, over the past year and a half, we have moved from a department that was surviving to a department that was thriving. This is my first time at this conference, and I first wanna just say, wow. I mean, it's gonna make me teary-eyed, wow. I am so inspired by all the posters that I've seen, by all the presentations, and I'm really inspired to see so many women stepping up and taking on these roles. So thank you for motivating me, and thank you for making me wanna be better, because, again, I'll be back. This is spectacular. And I also wanna thank a couple of people. The first one is our moderator, Lisa. If you don't know her, she is an absolute national treasure for Cath PCI. She was my first mentor when I started, and she taught me about being thorough with my documentation, and she is a total workhorse. The other person I wanna recognize is Christy Verschelden, our department manager. If you've been on any of the registry resource pages, you've probably seen a million things that she has shared on those pages. She is a data freak. It's her love language. She has a wealth of knowledge in all things data and chart analysis, and she is so generous with her knowledge. So thanks to them, and thank you to you for being so motivating to me. So just a little bit about where we're from. I'm from Baylor Scott & White The Heart Hospital. 
This is located in Plano, Texas; we're a suburb about 30 minutes north of Dallas. So for the people that are not from Texas, Dallas is just a quick five-hour drive straight north from here. And our little facility is like a little powerhouse: we have the main campus in Plano, and then we have two smaller facilities that are each about 20 minutes from the main campus. And between those three places, we have 30 acute care beds. Since this is a quality conference, I get to talk for one second about our quality awards. And I think you, more than my husband or our friends, understand kind of what goes behind all of this. So we are four stars for all public reporting for Cath PCI and EPDI. We are CMS five-star ranked. We are STS three-star rated in all five categories. We are VQI three-star ranked, and in U.S. News & World Report's Best Hospitals, we are 30th in heart and vascular. So as you can see, our quality team has done a lot of work, and we are super busy, and we're so excited that a lot of our work is able to shine and show how proud we are of where we work. Okay, so we're a high-volume cardiac hospital. We do our own case abstractions in-house, and along with that, we manage monthly reports and monthly scorecards. So each registry has a meeting with the physicians once a month where we bring them the data, and we present it in a rolling 12-month view and in a scorecard format. So we present them the data, we look at the metrics, and we're also responsible for our process improvements, and we lead the hospital with our data analytics and process improvements. We help with all the accreditations and certifications. We're proud to say we just completed our Comprehensive Cardiac Care Center certification, and those of you who have done that, you know that's a lot. And we're currently working on our EP certification, so we are very busy. 
So with that as background, one of the things that we find is that we really want to keep our data in real time. But when you get pulled away to run reports for physicians or for administration, and you are always working on managing the quality part and the data part, then we run the risk of our registries falling behind. And so what would happen is, over this past year and a half, we realized that, man, we were all in survival mode, because we would get to work a little bit early to catch up and get our charts in. We would stay late, or oftentimes we would work on the weekends just to get those cases in, especially because Fridays can be really high-volume days. So we were really in pure survival mode. So this is just an idea of the numbers. We have 10 registries that we manage, and in the past year, we put over 8,000 base procedures into the registries. And then down at the bottom, we put over 6,500 follow-up visits into the registries. So that's a lot. I mean, and again, this isn't a healthcare system. This is 130 beds of acute care. So we are hopping. Then we've kind of broken things up where we have teams, or we call them pods. And so the pod that I work within includes TAVR, LAAO, and EP. And you can see, 800 EP cases went into the registry this past year, 436 TAVR cases, and with those, as you know, come follow-ups at 30 days and one year, and then 627 LAAO cases. Now, I used to do LAAO, and my LAAO brothers and sisters, think about those follow-ups. So that is 461 45-day follow-ups, 431 six-month follow-ups, 343 one-year, and 215 two-year follow-ups put in this year. And we do all of that in-house. We do that ourselves. We are busy. So then we're thinking about, wow, we're busy, but we're surviving, and we can do better. So we did a deep dive, and we looked at the literature on teamwork, and of course, it kind of came out the way we thought it would, that teamwork is critical. 
And you can look at the research on teamwork as it relates to businesses or healthcare systems, and it all shows that teamwork results in improved job satisfaction and improved productivity, increases motivation within the workers, and decreases overall tension. And then I kind of like this little quote that I found, where it just says the healthcare industry is fast-paced and high-stress, and we're often short-staffed, and this leads to burnout, with the employees living in pure survival mode. And I'm sure some of you feel like your department is in survival mode as well. So we needed to make a change. We needed to do something. So here's the problem statement that we created a while back: can we improve our overall outcomes in case-finding turnaround time and abstraction turnaround time, and reduce our backlogs, if we move from this one-person-per-registry survival mode into something that's more of a pod-based, team-based abstraction? So that was the statement, and we weren't really sure. We kind of felt like we could, based on all the research about teamwork. This is where we started. That was the jumping-off point, but it took a whole lot of work, and it really took a lot of vision from the bigger people in our department. So this wasn't just a little thing. This was redoing it all. Beforehand, if you worked from home, you would log every single case that you abstracted. And mind you, that's a lot of cases. Your start time, your finish time... productivity tracking was just kind of a beating, and you dreaded it. So we needed to do a lot of new stuff. So our manager created a new tracking tool to look at our productivity, which I'll show you here in a little bit. And then we decided that we have so many great new people coming into this department, and they just didn't feel like they had a way to grow or anywhere to go. 
So we reorganized our whole entire department, which I'll show you in a second. And once we did that, we started the process of cross-training. Well, it just so happened that while all this was happening, within my pod, which is LAAO, EP, and TAVR, we found out that our EP abstractor was going to go out for a month on paternity leave. Now again, EP last year put 700 cases into the registry. With that person out for a month, that registry is gonna take a hit. The data's not gonna be current, and the backlog's just gonna get worse and worse and worse. So we were so glad that we had started this process. So our group really focused our cross-training initially on us learning EP. Okay, so this is what we did. This is how it started. This is our department org, and you might see that the second person in the black box is Ellie Huff. She was actually the moderator this morning. She came to our facility and joined our team, and she had such great vision that she saw we have great people who can do more than just sit at our desks and survive. And she was kind of the driving force behind us redoing everything and getting to where we reorganized our department. We now have a plan where we can grow and a pathway to advance ourselves. So if you look at this, you can see the orange on the left is the pod that I'm in. We organized it that way because it seemed logical: you know, TAVR, EP, and LAAO, those patient populations are very similar, and you're looking at a lot of the same data points. And then we have the pink, and that's our STS team, and we do all of the heart surgeries, the thoracic surgeries, and Intermacs. And then the purple, the last one, is our Cath PCI group and our Get With The Guidelines. Side note on Cath PCI: just PCIs, because we don't put in straight diagnostic caths. 
PCIs last year: they put 2,000 into the registry. And then blue is vascular, and we have a person that's going to join that team. And we really realized after doing this that we were so busy with follow-ups that we created a new position, and she just does follow-ups. And with that, she's able to start a lot of process improvements as well. So this is where we have grown to. This is really cool if you want to have an idea of how to look at your department's productivity. This was created by Christy, our brain and data lover, and this is our productivity scorecard that comes out every week. So this is our Monday morning thing, and you can see that the first part is the current case-finding turnaround time. For some of the registries, you have to do a little bit of digging to see if cases should be included or not. Like with EP, you have to get in there and see if it was a qualifying pacemaker or ICD, whether they're gonna be included in the registry, and we want to make sure that we keep that case-finding time down. Sometimes it gets a little bit higher, because if you are going into ARMIS, it takes a little while after the patient discharges for the case to get over to ARMIS. So that's the first thing that we look at. And the second one, which we love, is our abstraction turnaround time. On this report card here, at this time, the abstraction turnaround time was 2.2 days. We want that number as close to zero as possible. And what we have found is that when we can keep that turnaround time essentially in real time, then we can be really proactive and consult the physicians when we see that maybe a medication doesn't look like it's on the DC med list. We can Epic message them. If we're digging through the chart as we're abstracting it before the patient's gone, we can make sure that all the documents that we need to get all of our data points and have the metrics covered are included. 
And if they're not, then before the patient's even discharged, we have the ability to communicate with the physician and nurse practitioner and have that taken care of. And then the last thing is the current backlog. We have set numbers that we want that to be within, but you can see on this day that EP was doing good. There was only one case in the backlog. And the closer we have that to zero, the more it means our data is pure and ready to go. Okay, so this is what happened. It was so cool. So we started this process right around the holidays, and this is our team looking at EP. Prior to starting this cross-training thing, the EP abstractor, bless his heart, that's what we say in Texas, bless his heart, he had baby number two coming and the Cowboys were trying to get into the playoffs. So he was a little scatterbrained. His turnaround time had gotten up to 19.7 days, and his backlog had started to build up. And again, we want data to be done in real time. So right out of the gate, we started cross-training, and my partner Amber and I were taught how to do the EP registry. Within 10 days, the turnaround time for getting those cases in went from 19.7 days to 2.3 days. Now granted, that's three people working on the registry, and you could start to see that the backlog of cases started going down. It was down to 33 cases. And while we're doing this, we're all doing inter-rater reliability, and he's assessing that we're doing it well. Then he went on leave at the end of February, and by that time, we had gotten the turnaround time for EP down to two days and the backlog down to five cases. So he went off for a month, and during that time, Amber was doing TVT and then EP, and I was doing LAAO and EP. And we were actually able to maintain the EP caseload so that the turnaround time was three days and we had zero backlog. So we were really proud. 
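As a rough illustration of the three scorecard metrics described above (case-finding turnaround, abstraction turnaround, and current backlog), here is a minimal sketch of how they could be computed from per-case dates. The field names and the `scorecard` function are hypothetical, not the team's actual tracking tool.

```python
# Hypothetical sketch of a weekly productivity scorecard like the one
# described in the talk. Field names are illustrative only.
from datetime import date

def scorecard(cases):
    """Compute case-finding TAT, abstraction TAT, and backlog from case records."""
    found = [c for c in cases if c.get("found_on")]
    abstracted = [c for c in cases if c.get("abstracted_on")]

    def avg_days(pairs):
        return sum((end - start).days for start, end in pairs) / len(pairs) if pairs else 0.0

    return {
        # average days from discharge until the case was identified for the registry
        "case_finding_tat": avg_days([(c["discharged_on"], c["found_on"]) for c in found]),
        # average days from discharge until abstraction was complete (goal: near zero)
        "abstraction_tat": avg_days([(c["discharged_on"], c["abstracted_on"]) for c in abstracted]),
        # identified cases not yet abstracted
        "backlog": sum(1 for c in found if not c.get("abstracted_on")),
    }

cases = [
    {"discharged_on": date(2024, 9, 2), "found_on": date(2024, 9, 3), "abstracted_on": date(2024, 9, 4)},
    {"discharged_on": date(2024, 9, 3), "found_on": date(2024, 9, 4), "abstracted_on": None},
]
card = scorecard(cases)
```

Here the first case was found one day and abstracted two days after discharge, and the second case is still in the backlog, so the sketch reports a case-finding TAT of 1.0 days, an abstraction TAT of 2.0 days, and a backlog of 1.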
So this slide is kind of crazy looking, but it shows you where we were. The red line is the back... the top green line, sorry, is the backlog. It was kind of all over the place, and the square is drilling down into the time that we started this whole process. It started at 37 cases of backlog and 19.7 days of turnaround time. And once we started working as a team, abstracting together, you could see that the numbers really went down. And then what's really cool is that they stayed down. The other thing that's really cool is this: I said that in our department, we do scorecards every month. It was really neat. Last week, the first week of September, I'm now on the EP team, and we had our implanters meeting. All of the EP physicians get together once a month, and one of the things that quality does is we have a whole slide deck that we present at those meetings. And so I wanted to share this with you, because we noticed a while back that we weren't consistent with our metric 14, our DC medications, and that's the composite: ACE, ARB, ARNI, and the beta blocker. When you look at this, this is the three facilities. You can actually see that when we started cross-training and we could get the turnaround time for our cases to be done in real time, we became 100% compliant starting in April, May, June, and on with our DC medications. And the reason is that when we get to work in the morning, we do the case finding, and we also look at those DC meds to see what the anticipated DC meds are. And if we see that their EF is indicative of them needing that ACE, ARB, or ARNI and it's not there, then we can Epic message the physician as to why it's not there. So that was one of the really great benefits that we had when we started keeping our abstraction time so close to real time. 
You can see the same thing happen with our shared decision making. You know, we restarted this registry last year, and shared decision making was a booger to get everybody on board with. But when we were able to just stay with it, go back to these meetings, and talk to the physicians... we did make a SmartPhrase for Epic, which was spectacular, but just the ability to communicate with the physicians in real time really helped us to get those numbers to improve as well. So what we have found by cross-training is we've had improvements in our data. We've given real-time data, which aids in staffing, in equipment management, and in reduction of errors. We've had process improvements. Our turnaround time is in real time, and we can catch those medication issues before they become medication errors. We can do a stop-the-line for shared decision making. And we also have improved job satisfaction, and we have improved morale. There's no more working on the weekends. And you know, when you're just doing one registry, it really can be pretty monotonous. And we're really lucky that we did this, because the team member that we covered for never came back. But we were okay, because we had cross-trained, and it allowed us time to bring in a new person and get them taught. And what else is really cool is that, again, going back to this vision of our whole team, and Ellie seeing that we can be a quality team that effects change, that is empowered to take on our own process improvement projects and to manage our data: right now, our department has approved for all of us to be CPHQ certified, which is so cool, because we are really moving from being thought of as just number inputters to being an active part of improving the healthcare in our system and in our hospital, and then really effecting change, and we love that. 
So I wanna challenge you: if you feel like your department is stagnant and you're dying and you're just kind of lacking in motivation, because you just come to work and you just put in numbers, you can move away from surviving and you can start thriving. So just thinking about that every-man-for-himself mentality, there is a better way. And cross-training, we will just go on and on about how much it helps with the numbers, it helps with morale, and it just really makes going to work a whole lot more joyful. So we would like to encourage you: you can contact me, and I'd love to share even more of our story about how you can help your team. And again, we're high volume, like really high volume, and if we can do it, then I think that anybody else can do it. So that's what I have, and we'll have some questions here at the end, and I'll pass it on to Ms. Vicki. Thank you. Thank you, Jennifer. I've got a tough act to follow. So my name is Vicki Van Meter. I am from OhioHealth in Columbus, Ohio. A little bit about myself: I've been a registered nurse since I was, like, two. I've been a nurse for 42 years. I've worked for OhioHealth almost since birth, for 40 years. I've worked in CCU, I've worked in the cardiac cath lab and EP lab, and I had a device clinic. But for the past nine years, I've been working with cardiac registries. And I do have some polling questions, so if you all want to scan this QR code right now, I'll give you a few seconds to do that. So, first, let me introduce the team that worked on this project with me. In the back row on the left is Susie Arnold; she was the lead, and she is here today. There's myself in the middle, as the presenter. Amy Morris is on the back row on the right, in pink. She is our manager. On the front row on the left is Tara Ewing, and in the front row on the right is Beth Black. So, who is OhioHealth and what do we do? 
OhioHealth is a faith-based, not-for-profit healthcare system. We have 30,000 employees, 5,000 providers. We're a teaching facility with over 400 fellows and residents, and, of course, we need our volunteers. In 2023, we had 3.7 million outpatient visits. We had a little over 570,000 emergency room visits. We had almost 160,000 admissions. We did almost 108,000 surgeries, and we brought life into the world for 14,000 new babies. We are located in central Ohio in Columbus, which is home of the Ohio State Buckeyes, so O-H. Awesome. Riverside Methodist is our largest hospital, and it's our flagship hospital. It has over 1,000 beds, and in 2023, it was the 15th-largest hospital in the United States. We are also home to Ohio's busiest Level 1 trauma center at Grant Medical Center. And also in our OhioHealth family is Doctors Hospital, which is the second-largest osteopathic training facility in the United States. We now have 16 member hospitals, and we are located in 50 of Ohio's 88 counties, so we get around a little bit. We are also involved with cardiac registries, and with a large system, we have a large volume. We have six sites that participate in Cath PCI, which includes Riverside, Grant, Doctors, Mansfield, Marion General, and Dublin Methodist. We have five sites that participate in the VQI registry, and we have three sites that provide open-heart services that participate with TVT, LAAO, and adult cardiac STS. As a system, we submit almost 7,200 index procedures alone, and that doesn't include our follow-ups. In Cath PCI, and we are only submitting the PCI volume, we submit 3,481, and Riverside actually has almost half of that volume. So it takes a village to take care of Riverside. And if you're not working at Riverside, you're helping Riverside with that volume. We also participate in the stroke, ELSO, and Intermacs registries. 
And we have registry coordinators who provide services, covering abstraction for the different registries at different sites. So we're doing multiple registries at multiple sites. So today, I'm here to talk about IRR. What does IRR mean? Well, it kind of depends on what field you're working in. If you're working in finance, IRR stands for Internal Rate of Return, and it estimates the profitability of your investments. So actually, if you look at your 401(k) or your 403(b), you should see an IRR on there. It lets you know if you're making money. If you're in the military, IRR stands for Individual Ready Reserve. My son was in the Army, and when he got out of the Army, he was in the IRR because he had a reserve commitment to fulfill. But for us today, IRR stands for Inter-Rater Reliability, and it refers to the degree to which different raters or observers produce similar or consistent results when evaluating the same thing. A high IRR indicates that your raters are consistent in their judgment. A low IRR suggests that there are different interpretations when evaluating the same thing. So my first question will be: do you currently have an IRR process in place? You'll press 1 for yes and 2 for no. If you didn't get the QR code, it'll be up here, too. Oh. Okay. So actually, I'm kind of surprised. So 58% do not currently have an IRR process in place, and 42% do. So good. I'm glad you're here. My next question is, for those of you who have one, or even if you don't, what does your IRR process look like? Is it 1, a peer-to-peer review; 2, a team-based approach; 3, something other that I may not know about; or 4, you do not have an IRR process? Wow. All right. So you're pretty split on whether you have a peer-to-peer or a team-based approach. I would like to know what "other" is, just for my curiosity. And then, of course, 49% do not. So again, you've come to the right place today. 
Hopefully. I won't disappoint. So, our objective. At OhioHealth, we knew we wanted to start an IRR process to best achieve our desired data quality and consistency among our multiple hospital registry teams. And the registry team wanted this IRR to be a true value-added means of continuous improvement. We didn't want something that was just going to be a bunch of extra work for us to do, a bunch of busy work. Nobody likes that. So when we started, it was an internal audit between two abstractors. You would get an email, and you literally dreaded this email, because you knew, ugh, I've got to do this IRR. It was a single case that was randomly selected for review from each registry. Another abstractor would re-abstract that case using select data points, and then the results of that internal audit were compared against the original, and the differences were discussed one-on-one. Seems easy enough, right? But what we found out was that it was really confusing. So with a show of hands, if I can see you, how many of you see four? And how many of you see three? So we are all different in what we're seeing, and that's what we found out. It was really confusing, and we were each seeing something different. It was a lot of I said, you said. There were some strong personalities. You didn't feel free to defend your data, and if you did, you knew you were going to be wrong. So you just gave up, and you moved on. It was non-meaningful. It felt like a total waste of time. But we knew it was something that we needed to continue to do. We just needed to improve what we were doing. So we made some changes. Now we have a single complex case suggested for the team IRR by an abstractor for a registry. Then the team from that registry re-abstracts that case using a selected set of data points. And then all of the abstractions are compared to the original to determine an agreement percentage. 
And then any items of interest, mismatched answers, or items that had a lower agreement percentage are selected for a roundtable discussion. Then all of the abstractors meet for the roundtable, and we reach a consensus, and we document our findings. And I'll show you what we do. So this is an example of our Cath PCI IRR. And actually, I think the NCDR has something similar on their website. We do this for every registry, whether it's Cath PCI, STS, TAVR, or LAAO. We have a select set of data points, and you only re-abstract that select set of data points. So you're notified that there's an IRR, and you complete this. For example, this is just a section of what we complete for Cath PCI. One thing that we'll look at is the indications for cath lab visit. You know, we can list up to three, and you list those in alphabetical order. We'll look at chest pain assessment. We'll look and see, was there any cardiovascular instability? We'll look at the cath results. What were the cath results? When we get to the PCI, was it elective? Was it urgent? And if it was a STEMI, what was their onset of symptoms? What was that door-to-balloon time, most importantly? And then we're going to look at labs. We'll look at the pre-procedure labs and the post-procedure labs to make sure we're capturing the correct ones. And we will look at the lesions that were fixed during the PCI. We try to look at things that are going to affect our risk adjustment and AUC. And so this is the example of the spreadsheet that is completed. Once we complete the IRR, we send our completed IRR to the data manager, who compiles it into the spreadsheet. Once everything is entered into the spreadsheet, the person who submitted the case, or the original, will set up a Teams meeting for us to review, and we'll go over things. Anything that you see in yellow is where we had a mismatch. Anything that you see in red is where we had an agreement percentage of 75% or less. 
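The spreadsheet comparison described above, where each re-abstractor's answers are checked against the original, an agreement percentage is computed per data point, and items at 75% or below are flagged red for the roundtable, could be sketched as follows. This is a hypothetical illustration; the function and field names are made up, and the team's actual comparison is done by a data analyst in a spreadsheet.

```python
# Hypothetical sketch of the team IRR comparison: score each data point's
# agreement with the original abstraction and flag items for discussion.
def score_irr(original, reabstractions, threshold=0.75):
    """Return per-field agreement percentages and roundtable flags."""
    results = {}
    for field, orig_value in original.items():
        answers = [r.get(field) for r in reabstractions]
        agreement = sum(1 for a in answers if a == orig_value) / len(answers)
        results[field] = {
            "agreement": agreement,
            "mismatch": agreement < 1.0,          # "yellow" in the spreadsheet
            "roundtable": agreement <= threshold,  # "red": 75% or less
        }
    return results

# Toy example modeled on the EF disagreement discussed in the talk.
original = {"ef": 38, "angina_assessment": "atypical"}
team = [
    {"ef": 38, "angina_assessment": "atypical"},
    {"ef": 43, "angina_assessment": "atypical"},
    {"ef": 60, "angina_assessment": None},  # one person didn't click anything
    {"ef": 38, "angina_assessment": "atypical"},
]
scores = score_irr(original, team)
```

In this toy run, EF has 50% agreement and gets flagged for the roundtable, while the angina assessment sits at 75% and is flagged as well. As the tobacco-use example later shows, a flagged item doesn't mean the original was right, only that the team needs to discuss where everyone found their answer.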
And in my first example, we looked at EF, and the original person had 38. But when we looked at our IRR, we had anywhere from 38 to 60: 38, 43, and 60. So definitely we had some disagreement. We just needed to look to see where everybody was getting their numbers. When we looked at our indications for cath lab, we were 100% in agreement that it was ACS greater than 24 hours. But we also found out that some people listed three indications, and some people only listed one. So we did review to see if those were correct. When we looked at our angina assessment, we all agreed that it was atypical, except for one person who just didn't click anything. And then the cardiovascular instability: again, we all agreed there was none, but two people just didn't click no. So once we have reviewed the IRR, we document our results, again, in a spreadsheet. It's called the roundtable discussion, and we'll select, like, five items where we had a disagreement. We do the same for all the registries. So in this example, unless you've got really great eyesight you probably can't see it, but we looked at tobacco use, and we had an agreement percentage of 71%. Well, that seems pretty good, right? What we noticed was that a couple of people had said the patient was a former smoker. So we're kind of like, well, how did you find that? Where did you find it? Somebody had searched the chart, we use Epic, and found in Care Everywhere that the patient had been a former smoker. So 71% of us were wrong on that one. Just because you've got a high score doesn't mean you're right. When we looked at aspirin, we had 86% agreement that aspirin wasn't given, but somebody said it was, so again, we're not going to get burned again, so we looked. And yes, indeed, aspirin had been given, but aspirin was given after the start time of the lidocaine. 
So we looked at the definition, just to clarify, trust but verify, and it was after the start, which did not meet the definition, so aspirin was not given. And we used that as an educational opportunity. And we looked at events, and we were pretty split on this case as to whether there was an event or not. But somebody said there was hematuria, so again, we're going to search Epic. We found hematuria. Sure enough, there was. Now, does it meet the definition? So we're looking at hemoglobins, and sure enough, there was a greater than 3 g/dL drop. So in this case, as we would with all of them, we made the corrections and resubmitted it to the registry. So what have we learned from doing this? We've learned it takes some time. But we've also learned that we are getting better with our matched data elements. Our percentage of agreement has slowly increased over time, and on the last one, we were at like 94%, so we are getting better. We'd been running in the low 80s; now we're in the mid 80s to 90s. When we looked at the frailty scale, though, we're pretty flat. Frailty is something that our providers do not document, so it's left up to us for that interpretation. And what one person thinks is a 3, somebody else thinks is a 4. And what somebody thinks is an 8, somebody else thinks is a 7. So we're kind of worlds apart yet on that. When we look at our indications for cath lab, we're actually getting pretty good at that. We were kind of all over the map at first, and in our last few IRRs, we have been 100% in agreement on those indications. And of course, they do have to fit that algorithm. So just because the patient had a CABG 20 years ago and the doc is charting new-onset angina, we all know that's worsening angina. So we have to kind of make everything fit. And then when we look at NYHA class, that's another area where we've shown some great improvement. Again, we were all over the map, and actually really low on this. 
But NYHA class can be abstracted if there's heart failure documented and there's descriptive documentation of their symptoms. So we have become more consistent in our interpretation of that documentation now. And we've also shown improvements in our audit results. At OhioHealth, we get audited a lot, or at least I get audited a lot. I have been audited literally on every registry for every site, more than once. And I am comfortable that the NCDR has actually reviewed all of my data at this point. And we're so good, we had somebody who got 100% on her STS audit. Who gets 100%? She did, and then she retired. So, perfection, and then she was gone. But since we've changed the way that we're doing things, when we have gotten our results back, we are greater than 93%, which has made us a high-performing facility for data abstraction. And we do share that information with our physicians. But if there are any physicians in the room, I'm sorry, but a lot of the physicians don't read their emails. So all they see is that it was high performing. They think it's all about their program instead of about the data abstraction. So they send out emails congratulating each other. They never give us any credit for what we've done. But we just go ahead and let them give themselves their pats on the back. And then we know that we did a good job, so we sleep well at night. In conclusion, our IRR, our group IRR especially, has promoted consistency across our multiple hospital registry teams. We have developed standardization of where to find the data in the electronic medical record. Value was added in that group IRR discussion regarding inconsistent items found by the team. And we have more points of view now, instead of the one-on-one, I said, you said. That mentality has definitely gone away. And the group IRR changed in nature from feeling like an audit, although don't let me fool you for a minute that we look forward to this, because we don't. 
But it has come to the point where we know it is a constructive method to debate our best practices. And documenting the group IRR roundtable discussions has provided us with some valuable educational resources for some of those difficult data elements.

So what's next? We would like to share the results of our IRRs with our physicians and stakeholders so that they can have increased trust in our data. That way they know that what we're submitting is correct, that we're abstracting the same across the system, so no matter who's doing it, it's the same, and that their reports are accurate. They may not like their reports, but what they're getting back is accurate. We would also like to individualize our IRR for that difficult, complex case that you have questions about and submitted, instead of doing a blanket, one-size-fits-all IRR. Try to find out what it is we're really trying to pull out: why did you submit this case? What is it you want to look at? So we can all look at it together and learn from it. And we would really like to have some vendor automation for our IRR results, with an automatic comparator instead of having a data analyst input all of this data manually into a spreadsheet. And I'm sure he would really like that, because it would save a lot of time.

So this is our contact information. Please feel free to reach out to any one of us at OhioHealth. Myself, Amy, and Susie, who is Rebecca, are all here today. Beth and Tara are back in Columbus abstracting those 3,500 PCI cases for Riverside, I can guarantee you that for sure. And so with that, I thank you for your time today and your attention. I hope I was able to entertain you a little bit, and you were able to learn something. I wish everyone a happy, healthy ACC conference, and I think we'll open the floor up to questions.

All right, thank you both.
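The "automatic comparator" wished for here could start very small: gather each abstractor's answers for one case and report only the data elements where not everyone agrees. This is a hedged sketch under assumed data shapes; the abstractor names and field names are illustrative, not any vendor's real format.

```python
# Hedged sketch of an "automatic comparator" for group IRR results: collect
# each abstractor's answers for one case and report only the data elements
# where not everyone agrees. Names and fields are illustrative assumptions.
from collections import defaultdict

def mismatch_report(answers: dict) -> dict:
    """answers maps abstractor name -> {element: value}; returns the elements
    with disagreement, along with each abstractor's entry for that element."""
    by_element = defaultdict(dict)
    for abstractor, elements in answers.items():
        for element, value in elements.items():
            by_element[element][abstractor] = value
    return {elem: vals for elem, vals in by_element.items()
            if len(set(vals.values())) > 1}

case = {
    "Abstractor A": {"frailty": 3, "nyha_class": "III", "bleeding_event": "Yes"},
    "Abstractor B": {"frailty": 4, "nyha_class": "III", "bleeding_event": "Yes"},
}
print(mismatch_report(case))  # only frailty differs between the two
```

Exporting a report like this per case would replace the manual spreadsheet comparison the analyst currently does by hand, and it hands the roundtable a ready-made agenda of only the disputed elements.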
That was really great, and I'm amazed at all the questions that have been submitted. So, hang on. First one, I'm just going to start off, and it's for both of you. Okay, my screen just moved, and that question moved. Hang on. Are most of your abstractors clinical or non-clinical?

I'll start while she's having a drink. So, in our department, it's spectacular. Among the people who actually abstract, I'm a physical therapist; amazing that I ended up here. We have a respiratory therapist. We have a registered nurse, one that's a true abstractor. We have a cardiac rehab exercise physiologist. And then the other people on our team just came to us: one came through lab quality, and the others were essentially working in the clinic offices. And I think that's what we have really learned, that it doesn't have to just be clinical people. We had somebody that worked for AT&T. If you can open your mind and just teach people the definitions and teach them the criteria, you really bring in a lot of different points of view and excitement for how to look at the data. So we are all over the board in our department.

At OhioHealth, we are all clinical. The majority of us actually did have cath lab experience, and I think all of our PCI abstractors have cath lab experience. For STS, we had a perfusionist. She was also a nurse, but I don't know if she ever really worked as a nurse; she was the one that retired, but she had open heart background. I mean, like I said, she was a perfusionist. And our two abstractors in STS have open heart experience. Of course, I do STS, and I was cath lab and critical care. The same thing with TVT and LAAO; they all had cath lab experience. So, in short, all of ours are clinical.

And I believe you may have both already answered this, but are any of your abstractors nurses actually in the cath lab doing the cases, or the EP lab?
Lisa used to be the cath lab manager, and then she came over to the dark side. She was the leader of the cath team, but that's the only one. I think that's the only one that's worked in the cath lab or EP lab for our department. Yep.

All right, this question is for Jennifer. How many team members do you have? And please tell me you are not salaried with all that overtime.

Well, we are salaried. And we have the ten registries: Cath PCI has two people dedicated to it, Get With The Guidelines has one, LAAO has one, and everybody else has one, but STS has three, and then there's our follow-up person. So I think, of the people working purely on the registries themselves, there are about 12. Then we have some people on the quality side of our team working a little bit more toward the certification processes and process improvements, and some that just do the abstraction. I think our department truly has 15, but maybe 12 just focused on registry work.

And actually, Vicki, kind of the same question for you. How many abstractors do you have for each registry?

For Cath PCI, we have five. For TVT, we have two. For LAAO, we have three, and for STS, we have three. We have eight full-time abstractors in total, I think. That includes VQI, let me say that, too.

For either of you, do you have any issues with waiting on scanned documents affecting your turnaround times?

I will say for me at OhioHealth, a little bit, not with the NCDR registries per se, but with STS I do. We have a paper document for perfusion, and sometimes I have to wait for all of that to get scanned in before I can finish that case totally.

Yeah, I don't think we have as much of a problem with that, because we're such a small facility and all together that those forms are walked over to our department on a regular basis. And same with our TAVR patients.
The fellow fills out the last part of the registry form, and they just get them to us regularly because we are really in one location. So, not really.

For Vicki, can you share your IRR tool for the various registries?

Yes, we would be happy to share that. And they can reach out to you; your contact is in the app here with the meeting. Yes.

When your EP person went on leave and you were able to maintain no backlog, was there pushback from administration that you didn't need the EP person anymore?

No, and the reason is that for each registry, I think I alluded to it earlier, we have slide decks. So it's nice because it's not just the abstraction part. I could see it if it were just the abstraction part, maybe, but each person is responsible for maintaining a slide deck, taking the numbers from the dashboard, turning them into a scorecard, and having that ready to present to our physicians. And so while we were able to cover the abstraction and get the cases in, we still had to stretch to get the quality scorecards out. Our physicians are quality champions, so we're really lucky that they want that data and want to see where we can improve, so we didn't have any pushback. And also, when you're a registry site manager, if you have a metric that falls below the 50th percentile, there's an expectation that we're thinking: what is the process improvement we can do? We need to look at that a little bit deeper. So it really did need its own person, and we were lucky that we were able to cover it, but no, thankfully, they didn't try to eliminate anything.

Kind of along with what you just mentioned, for either of you, with such high volumes and only so many abstractors, how do you divide your time between abstraction and quality improvement work?

Well, what we do is... And then you can jump in. So we start...
Typically, most of us start the day on the abstraction side, because it's really critical that we catch any of those patients if there's a medication that's missing or something that's not there. And I know that Lisa is the most dedicated; she's there at the crack of dawn, looking through all those EP cases to make sure all of the meds are in order and to make sure cardiac rehab has put their information in. So we do our day in chunks. We get cases going, and that allows us time the rest of the day to work on whatever different meetings we have to go to, or the different committees we're on for certification processes. And then maybe at the end of the day, we go back to any cases that haven't discharged; you can have them preloaded, ready to put in the final bit with the discharge meds. So we split it up throughout the day.

And at OhioHealth, we're pretty similar. We do break our work up into chunks, and you've got so many meetings you've got to attend, so you're planning your day around how much abstraction you can do. Some of us set a goal for ourselves and try to get through those meetings, and if you know there are things that are missing, try to do the follow-ups on those things. It is very busy, though, I'm not gonna lie. When you've got that volume, it is hard to keep up.

Ooh, and what we did, and I want y'all to just take this as a gift: we implemented Focus Fridays, which means no meetings on Fridays. And that's a great way that you can catch up. Maybe once in a blue moon there's a meeting on a Friday, and people are like, what? A meeting on a Friday? Because Fridays are our day to really get the rest of that data done and to work on our scorecards. Allowing ourselves that time to do our work has really been a great way to get that balance throughout the week. So, highly recommend.
Vicki, how frequently do you do your IRR audits? Are they monthly? Quarterly? How many cases do you review at each round of audits? These are kind of lumped together; it's one long question. And do you rotate between the different registries?

We do an IRR quarterly for each registry. So we will do one for Cath PCI, one for TVT, one for LAAO, one for STS, and I think one for VQI. So overall, one per quarter per registry. I mean, we do about 20 in a year, and because some of us are involved with multiple registries, you may end up doing two or three IRRs each round. So we try to do only one per quarter, just to be conscientious of everyone's time, because it does take time to do those. And again, you don't abstract it like you would your own case that you were going to submit. You do look, but are you quite as accurate? Probably not. But it does take time. One per quarter.

Kind of tagging on to that, do you see any barriers to not doing a full-case IRR audit?

Just the time that would be involved with that. I mean, we're all familiar with the registry. If somebody's got a really complex case, they probably sent out an email or chatted somebody to ask questions. So we just try to do the highlights of what we think are important, like AUC and risk adjustment, to make sure we are capturing those things that are going to affect our scores a lot.

Thank you. For both of you, are you using a third-party vendor or doing manual abstraction directly into the NCDR?

We use a third party for STS and Cath PCI, and for the rest of them, we go straight into the NCDR.

And at OhioHealth, we use a third party for Cath PCI and STS. We don't use third-party vendors. They're not? We don't use anybody to do our abstraction for us. Is that what the question was?

Well, it could be taken either way: if you use an outside source, or if you use third-party software.

So we use LARMIS as our software for Cath PCI, STS, and actually TAVR, but we do the abstraction ourselves.
Yeah, we do all of our own abstractions. We use Epic for Cath PCI, we just switched to Cedaron for STS, and we input directly into the NCDR for TVT and LAAO. And I don't know what we do for VQI. We do something; I don't know what.

You both just answered this other question; we're hitting multiple questions with your answers, so you're doing a great job. Having the staff to accomplish real-time data abstraction puts a financial load on an institution. How were you able to convince leadership to invest in your department for the staff?

Well, our hospital is about quality. We have a quality wall. And I think there's complete buy-in, because if you want to be 30th in heart and vascular in U.S. News & World Report, that is reflected by your quality. And if you want to be CMS five-star or STS three-star in all categories, you don't do that without the quality department taking a really active role. So I don't think there was ever really a question, and that's why we're so grateful. The mission and vision of our hospital has always been quality, so we've never really had a lot of pushback on that. We're very lucky.

For IRR case selection, do you use a randomizer? As you displayed, the indications for cath lab visit have had a 100% match rate; were those complex cases or straightforward?

It's just straightforward what we're doing. We don't use a randomizer; we select a patient just by somebody submitting a case they want reviewed, and then we all review it. And it's, I hope I'm answering your question, it's our ideas about what we think were the cath indications for that case, especially if the physician doesn't mark it. Did that answer the question? It did. Okay, perfect.

Refresh here. This is a little bit of a repeat, but worded a little differently. How do you select the IRR data points, and do these change once there's no mismatch over a period of time?
That's a very good question. And right now, they do not. If we're matching, we've still kept it, but it's one of the things we have talked about: we do need to rework it when we're getting really good at something and pick something else that perhaps we're not as good at. And obviously, we have to do updates when we have new versions. Thank you.

Jennifer, what is the difference between a QI coordinator one and two in your org chart?

Okay, well, you come in as a coordinator one, and it's about a two-year process, or a three-year process if you don't have a clinical background, to be able to move up. You have the ability to move up to a coordinator two, but a coordinator two is going to take on a few more leadership roles, like maybe being a pod leader or taking more responsibility in presenting the data to a larger group. If you have, say, Cath PCI with a couple of people working it, then the registry site manager would be a level higher than the entry-level person. But the nice thing is that everybody has the ability to progress with time and experience. To move up, you take on more responsibility in leading chart audits for your team, in who you present your data to, and maybe a little bit more responsibility when we do EP certification or our comprehensive cardiac care certification. So the responsibilities grow as you grow. Thank you.

Vicky, have you explored including IRR as one of the factors in the abstractors' annual evaluations?

We did do that, actually. You have to participate in the IRR process; you can't just get a pass and try to skip out of it. And we do try to look at how well we're doing. It's not punitive in that respect; it's used as educational.

Thank you. And for both of you, do your abstractors work in house or remote?
Our abstractors are all remote, so we all do work from home.

Well, not so much for us. We work two days a week from home, and the other three days we work in house. But it's built into that productivity report card I showed you, the one that comes out every week: if you fall into the red on your registry, showing that you're struggling, it's not punitive, but you're going to be asked to stay in house so that the team can help you more. And in a perfect world, we would all love to stay at home, but we're so interactive with our physicians that it's just not practical for us to work from home; you would need to be up there at least a couple of days a week to be at a meeting and interact with the staff. So, two days from home, and we are very grateful for those two days.

All right, we have less than one minute left. Take it. And questions are still coming in. I do know that when we were prepping for this session, they said that any questions left over, we would still try to filter through and possibly get answers out. So both Jennifer and Vicki will see these questions, and we'll try to answer them as best we can. But do either of you have any closing remarks?

Thank you for your interest, and I've really appreciated the questions. This has definitely at least piqued some people's curiosity about what we're doing.

Yeah. And again, this being my first Quality Summit, I'm just blown away by you. I'm motivated by you. I'm proud of all of us for being so proactive about effecting change at a larger scale. When I moved to quality, my friends were like, you will never last, because I was at the bedside for 32 years. And I loved that relationship with my patients and effecting change on a micro level, but to see how every one of us is effecting change on a macro level, it's quite lovely, and I'm so grateful for you all. Is that it?

Thank you both. We did it.
Video Summary
The session, titled "Data Abstraction Quality: Lessons from Your Peers," was moderated by a representative from the Heart Hospital Plano and featured presentations by Jennifer Waters from Baylor Scott & White, The Heart Hospital, and Vicky Van Meyder from OhioHealth in Columbus, Ohio. Jennifer Waters shared how her department transformed from merely surviving to thriving over the past year and a half. She attributed this success to teamwork, real-time data management, and process improvements, particularly by transitioning to a pod-based abstraction system. Jennifer emphasized the importance of maintaining real-time data, which prevents backlogs and allows proactive consultations with physicians. Her team manages 10 registries covering over 8,000 procedures and 6,500 follow-up visits annually. She also highlighted successful cross-training efforts that enabled her team to maintain quality data abstraction during staff shortages.

Vicky Van Meyder discussed OhioHealth's approach to improving Inter-Rater Reliability (IRR) in cardiac registries. By shifting from a peer-to-peer review to a team-based IRR approach, they achieved more consistent and accurate data abstraction. She described how they engaged in roundtable discussions to reconcile discrepancies and improve processes, resulting in OhioHealth being recognized as a high-performing facility for data abstraction. Vicky stressed the importance of regularly reviewing and updating IRR practices to maintain data quality and leveraging automation for efficiency. Both presenters emphasized the role of teamwork and continuous improvement in achieving excellence in data abstraction and quality assurance.
Keywords
Data Abstraction
Quality Assurance
Teamwork
Real-Time Data
Pod-Based Abstraction
Inter-Rater Reliability
Process Improvement
Cross-Training
Automation
Cardiac Registries