Best Practice Sharing - Using Data to Drive Improvement - Verschelden
Video Transcription
Hi, everybody. Thanks for joining this session focused on using data to drive improvements. My name is Christy Verschelden. I'm a registered nurse in North Texas, working as a process improvement specialist at Baylor Scott & White, The Heart Hospital Plano. I work in our healthcare improvement department, where we internally abstract, process, and report registry data. Our department also supports our two sister campuses nearby in Denton and McKinney. This month I'm actually celebrating my 10th year in the department, which correlates to a decade of experience with the NCDR suite of registries, so I'm especially happy to be with you here today virtually at the conference. I have listed my contact information here on the screen. Please don't hesitate to reach out with any questions about the presentation or even to share your own experiences with this topic.

You might be wondering if this session is actually for you. For a quick pulse check, let's play a variation of the game called Never Have I Ever. Here are the basic rules. Hold up an open hand. You're going to start with five points, represented by your five fingers. I'm going to share a statement about something I might have never done. If I've truly never done this, I'll keep the point and keep my fingers up. However, if I have actually done this, I'm going to lose a point and lower a finger. Are you ready? Here we go. Never have I ever reported old data. Never have I ever abstracted data points incorrectly. Never have I ever unintentionally reported inaccurate outcomes. Never have I ever felt like nobody speaks my language. And never have I ever worried about my case backlog growing if I decided to take time off. Do you still have five points at the end of the game? Good for you. I do hope you'll still stick around. But if you have fewer than five points like me, you are the target audience for this presentation.

In this session, I'll share how our department uses data validation and productivity monitoring to increase the accuracy and timeliness of the data we report. There are two learning objectives. The first: describe an action that can be implemented to achieve improved data quality. The second: describe an action to help improve the timeliness of the data we report internally. Our goal as registry professionals is to collect data, measure our clinical performance, and benchmark ourselves against our peers. We can use our outcomes to highlight and celebrate areas where we perform very well, or we can use our data to identify areas where we have an opportunity for process improvement. To do either one well, our goal really should be to provide our clinicians with access to timely, trusted, and actionable data. But maybe you're struggling with abstracting data in a timely manner. And maybe that's complicated by not having complete confidence in the way you're applying your definitions, which in turn can make you question the integrity of the outcomes you're reporting. Where should you start? Should you focus on improving your abstraction timeliness, or focus more energy first on improving your data accuracy? It's a bit of a chicken-and-egg analogy. As I'll lay out in the next slides, both topics really do go hand in hand. You do have to prioritize based on your individual situation, but ultimately, think about developing a process where each concept complements the other. In 2016, our hospitals were seeing exponential registry volume growth.
Our department had recently been approved for additional staffing to accommodate that growth, and my position was redefined to help support productivity and data accuracy for our registry site managers. At baseline, we did not have an internal tracking system to monitor our average abstraction turnaround times. Instead, we were really focused on meeting our registry-defined call-for-data deadlines. At that time, we were managing seven unique registries across two campuses. At baseline, our average abstraction turnaround time from patient discharge to case abstraction was 51 days, with a range of one to 93 days.

We also lacked a standardized process for pulling data to report to our Medical Executive Committee, or MEC, which meets monthly. Some of our abstractors relied on published outcomes reports; however, the average age of data available in that format was 172 days. Others were proactively using data available on the registry dashboards, but they were pulling it just after the registry data deadline, and that data was still 133 days old. To combat this for high-focus metrics, we were processing data monthly through manual auditing. That was a significant duplication of effort for us: we were manually auditing cases to compile these reports and then later going back to abstract the same cases into the registry. Despite these efforts, the average age of data we were reporting to MEC was 146 days. It wasn't timely, and we were well behind the curve to course correct for negative trends.

To further complicate things, we had some variation in the time spent scrubbing data prior to each deadline. Some abstractors performed a patient-level drill down for every metric on their reports, while others were only drilling down metrics of high focus internally. We also did not have a process in place for inter-rater reliability auditing, so often we didn't have a second set of eyes on data trends before they were reported. While we recognized we had an opportunity to improve the confidence in the data we were reporting, it was a bit of a catch-22 because, as I've mentioned in prior slides, we hadn't developed a level of productivity that allowed us time to spend scrubbing our data.

This graph represents one metric we were concerned about at the time. When the ICD Registry revamped reporting for appropriate use criteria, our gut told us that the integrity of the data we were looking at might be in question. Please remember, we're a cardiovascular specialty hospital. We knew that our providers were in tune with the guidelines, and they've shown great engagement in meeting our documentation requirements. It really didn't feel like we sat that far below the 50th percentile benchmark. However, we did recognize that this was a new metric for one of our newer abstractors, and we were a little suspicious we might have an opportunity to ramp up knowledge about the data points that fed into the metric.

I do want to take a minute to point out that the early work we did to address the obstacles we were facing in 2016 is available on the NCDR participant resource sharing sites. In these workbooks, you should be able to find the tools and concepts we applied for data validation and productivity. If not, please don't hesitate to reach out and I'll be happy to forward them to you.
I'm going to dive into each concept in the following slides, share the outcomes of our early work, and give some insight on how we've made those processes sustainable over the last five years.

Let's start with data validation. When I talk about validation, I really mean data cleansing or data scrubbing. It means checking the accuracy of your source data before it's published or reported, and that's where energy can be wisely spent to enhance the confidence in the data you're reporting. Different types of validation can be performed depending on your objectives. Here's an overview of three processes that we put in place at our facility. The first is focused auditing, which is done at the data point level. Here we're measuring someone's understanding of how to apply a coding instruction or a target value. It can also reveal their ability to apply FAQs in relevant case scenarios. This process actually replicates the NCDR annual auditing program. Next, we implemented metric-level audits. That type of auditing we do quarterly, prior to every data deadline. Here we're looking to measure someone's ability to recognize whether a patient-level fallout was truly a fallout. That process looks at how those single data points feed up into a numerator and denominator, and maybe even impact metric inclusion criteria and outcomes. And then finally, we created a process for registry adjudication. The tools we'll touch on are used to query clinicians whenever we need clarification about specific pieces of documentation.

Let's start with how we apply our focused audits. We use the concept of inter-rater reliability. These audits show us the degree of agreement among raters. The process provides a match score based on how frequently two people agree on the appropriate way to abstract a data variable. By having that process in place, you're going to gain someone to collaborate with, and really somebody who speaks your language. There are several ways to do this type of audit. You could do full case audits, which are very valuable because they can help you identify areas of unknown weakness, or, in the interest of time, you can select data points to audit based on specific concepts. That might look like auditing specific sections on your data collection form, maybe your history and risk section, maybe your procedural section. A good way to get started is to use the NCDR self-audit workbook. This can typically be found on your registry's documents home page. If one is not available for your registry, you might reach out to NCDR for an example from another registry. I'm also happy to share a more generic version of the tool we created; please feel free to just email me and I'll send that over to you. But again, the NCDR tool is a great place to start. It provides a nice overview of the purpose of auditing, lays out a process map, and really has detailed instructions for implementing an annual auditing program. They do suggest a match score of 90 percent. When scores are below 90 percent, they suggest developing an action plan and then reassessing those variables regularly. Implementing a process like this can help you enhance your preparedness should you be selected for one of their annual audits.
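As an illustration, here's a minimal sketch of how a data-point-level match score like this could be tallied, assuming the two raters' values for the audited variables have been listed side by side; the variable names and values are made up for the example, not NCDR-defined fields.

```python
# Illustrative only: tally a simple inter-rater match score for audited variables.
# Each record holds the value coded by the primary abstractor and by the auditor.
audited_variables = [
    {"variable": "prior_mi", "abstractor": "Yes", "auditor": "Yes"},
    {"variable": "diabetes", "abstractor": "No", "auditor": "Yes"},
    {"variable": "discharge_beta_blocker", "abstractor": "Yes", "auditor": "Yes"},
]

matches = sum(1 for row in audited_variables if row["abstractor"] == row["auditor"])
match_score = matches / len(audited_variables) * 100
print(f"Match score: {match_score:.0f}%")  # 67% here, below the suggested 90% threshold

# Variables that disagree would go onto the action plan for re-education and reassessment.
for row in audited_variables:
    if row["abstractor"] != row["auditor"]:
        print(f"Reassess: {row['variable']}")
```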
And now I'm going to tell you a little bit about our quarterly metric audit process. This is a process that we found very effective to enhance registry-based learning. If you're familiar with the teach-back method from the clinical setting, this basically models the same. We use it routinely for each of our new registry site managers. To start, patient-level drill downs are used to identify metric fallouts at the patient level. In the primary validation process, the learner reviews each of those fallouts, and the goal is for them to self-identify any abstraction errors. The learner then corrects the data where necessary and resubmits the quarter. At that point, they hand it off to a trainer for what we call secondary validation. The trainer repeats that same review for each fallout. If the learner did not apply new knowledge and self-identify abstraction errors, it provides the trainer an opportunity to offer clinical insight and references to applicable registry resources. The learner then corrects the data, resubmits, and confirms the impact of any changes made on their final outcomes.

Soft deadlines are a critical component of this process. You have to ensure there's adequate time before the hard registry deadline for a two-person pass at validation and submission. At minimum, primary validation should be completed two weeks before the hard deadline. That allows opportunity for the review, as well as resubmission, and a chance to view your data before it goes to a published format. You might also consider using soft deadlines if you're using an abstraction vendor; by doing that, you have time to preview your outcomes prior to data being published.

This is just a quick screenshot of how we apply a match score for our quarterly metric audits. Basically, we're using an Excel workbook to list our patient-level fallouts for each metric. There's a place for the learner to make comments if they made any changes, as well as a place for the trainer to jot down their notes and thoughts. Those notes are going to make it easier to complete your review when you sit down to discuss the findings together. We do apply an all-or-none match score, basically evaluating whether all the data points that fed into the metric were abstracted correctly. By making that process measurable, it allows us the opportunity to evaluate competency with metrics and measures.

We also put in place some adjudication tools. These are processes that help our abstractors query providers when they're either missing pieces of critical documentation or something is not explicitly documented. An example might be when a patient is not prescribed a beta blocker at discharge after having a heart attack. Maybe you see the patient was hypotensive throughout the stay, but there's no explicit documentation of why a beta blocker was not prescribed, so it becomes a metric fallout. In that scenario, you might want to query the physician to confirm whether the low blood pressure was the medical reason for not giving the beta blocker. We did create a registry adjudication form that we have in hand when we know we're going to go talk to a provider and ask about the patient. It can also be sent by secure email or secure fax. For us, that's just a step up from having the discussion with the provider and then asking them to go back and addend the chart later. Once the form is completed, we do scan it into the medical record. And like many of you, you probably have a process in place within your electronic medical record to query the provider directly. That's a feature that we've only obtained in the last year, and it's something that we are starting to implement in our department as well.
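Circling back to the all-or-none match score mentioned a moment ago, here's a minimal sketch of how a fallout-level audit might be scored, where a case only counts as a match if every contributing data point agrees between learner and trainer; the field names and values are purely illustrative, not the actual workbook layout.

```python
# Illustrative only: all-or-none scoring for a quarterly metric audit.
# Each fallout lists the data points feeding the metric as (learner value, trainer value).
fallouts = [
    {"patient": "A", "data_points": {"discharge_beta_blocker": ("No", "No"),
                                     "contraindication_documented": ("No", "Yes")}},
    {"patient": "B", "data_points": {"discharge_beta_blocker": ("No", "No"),
                                     "contraindication_documented": ("No", "No")}},
]

def case_matches(case):
    # All-or-none: every data point feeding the metric must agree between learner and trainer.
    return all(learner == trainer for learner, trainer in case["data_points"].values())

matched = sum(case_matches(case) for case in fallouts)
print(f"All-or-none match score: {matched / len(fallouts) * 100:.0f}%")  # 50% in this example
```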
Let's shift gears and talk about abstraction productivity. The goal for monitoring should be to enhance access to timely, actionable data. A key metric that we use is abstraction turnaround time. That's simply a formula that measures the number of days between patient discharge and case abstraction, and we average our performance over the course of our fiscal year. In order to track it, you need to have the discharge date and the abstraction date documented and available for trending. It's also helpful to have the name of the abstractor documented.

We use one of two formats to meet those needs. For registries that are entered into our vendor software, we requested custom fields to capture the data points we needed. That allowed us to pull a data extract with those fields and then average performance over time. For registries that are not entered into software with those custom fields available, we created Excel workbooks. During case finding, the abstractor enters each new patient into the workbook, then tracks discharge and abstraction dates. Those workbooks often serve other purposes, which helps us consolidate processes and save time. One example would be tracking patient-level inclusion and exclusion decisions. For registries with complex scenarios, like the Chest Pain MI Registry, it allows us to capture the reason for exclusion and helps us prevent reviewing the same case more than once. The workbooks are also used to capture information that falls outside of registry fields. One scenario would be the TVT Registry, where we track cases that are excluded from the registry but are enrolled in a specific research study. That gives us a quick way to count our commercial volumes versus our research volumes, or our total cases. And to take things a step further, since you have your data available in Excel, you can quickly apply pivot tables to track your outcomes and trend them over time.

We also implemented a really robust anchoring process as a strategy to reduce the amount of time we were spending at the end of the month doing those manual reviews for our high-focus metrics. One example is how we use anchoring for our discharge medications during our case finding process, which usually occurs every day where it's feasible. We go ahead and anchor our cases into the abstraction software. That's not full case abstraction; it's just enough information to tag the patient into the software and to have it available to enter key pieces of information. So the patient's been discharged. We go ahead and look at their discharge medication list, confirm whether what they need has been prescribed, has not, or has been contraindicated, and we abstract those variables. Then, on demand, we have the ability to pull an extract from the software, quickly apply a few filters or a pivot table, and know at any point in time where we stand on medication compliance. We also standardized our expectations for submitting data. If we're abstracting a case during the week, we're submitting that case every single week. That allows our data to be processed by the registry, and we always have outcomes available to pull on demand.
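Here's a minimal sketch of the turnaround-time metric and the per-abstractor trending described above, assuming the discharge and abstraction dates have been pulled into an extract; the dates, names, and field labels are illustrative, not the actual custom fields or workbook columns.

```python
from datetime import date

# Illustrative extract: one row per abstracted case, with discharge and abstraction dates.
cases = [
    {"abstractor": "RSM 1", "discharged": date(2021, 8, 2), "abstracted": date(2021, 8, 9)},
    {"abstractor": "RSM 1", "discharged": date(2021, 8, 3), "abstracted": date(2021, 8, 12)},
    {"abstractor": "RSM 2", "discharged": date(2021, 8, 5), "abstracted": date(2021, 8, 30)},
]

# Turnaround time = days from patient discharge to case abstraction, averaged over the period.
turnaround_days = [(c["abstracted"] - c["discharged"]).days for c in cases]
print(f"Average turnaround: {sum(turnaround_days) / len(turnaround_days):.1f} days")

# The same extract supports per-abstractor trending, much like an Excel pivot table would.
by_abstractor = {}
for case, days in zip(cases, turnaround_days):
    by_abstractor.setdefault(case["abstractor"], []).append(days)
for name, values in sorted(by_abstractor.items()):
    print(f"{name}: {sum(values) / len(values):.1f} days over {len(values)} case(s)")
```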
As I mentioned before, we do have a productivity workbook available on the participant resource sharing sites, but I thought using a calendar might be a good visual way to talk through what that looks like, and it's a paper tool that you could use just as easily. So I'm going to use that calendar to visualize and identify some productivity targets. Let's say today is August 30th. Today, I'm abstracting cases that were discharged back on August 2nd. If you count back, that represents a 28-day turnaround time from patient discharge to abstraction. I've decided I really want to cut that in half and have a turnaround time of only 14 days, so I've highlighted that week in yellow. I've looked at my volume trends, and I know on average we discharge about 10 cases a week. At that rate, because I'm abstracting data that's four weeks old, I have probably about 40 cases in backlog. So in order to achieve a turnaround time of two weeks, I can estimate that would be represented by about 20 cases in my backlog.

When you're setting productivity goals, abstraction time studies are always going to be your first stop. Until you understand the average time per case, it's difficult to set an achievable goal. If you need help with a tool, again, you can find it in one of our workbooks on the NCDR resource sharing site. But this can just as easily be done by jotting down on a piece of paper the number of cases you do and how long each takes you.

So let's use the same scenario from the prior slide. I currently have 40 cases in my backlog, but I want to get it down to 20, which represents a 14-day turnaround time. I'm 20 cases away from achieving that goal, which I'm going to refer to as our gap volume. We also know that, on average, 10 cases are discharged every week. That means that every week I have to abstract 10 cases so that my backlog doesn't continue to grow. That value becomes our maintenance value. Now I have to figure out how to reduce the gap volume of those 20 cases over time. At first, I think I want to achieve that goal within four weeks. So I divide those 20 cases in my gap by four weeks, and I learn that I'm going to have to abstract an additional five cases over my maintenance value to achieve that goal. I realize that's not realistic, given my competing priorities and the time I have allotted every week for abstraction, taking into consideration how long on average it takes me to abstract each case. But if I recalculate and give myself 10 weeks to eliminate the 20 in the gap, I now only have to do two additional cases every week to achieve my goal. Totally possible.
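To make that goal-setting arithmetic concrete, here is a minimal sketch using the example numbers from the calendar scenario (a 40-case backlog, about 10 discharges per week, and a 14-day turnaround target); the variable names are just for illustration.

```python
# Illustrative only: translate a turnaround-time goal into a weekly abstraction target.
weekly_discharges = 10   # average new cases per week (the maintenance value)
current_backlog = 40     # cases awaiting abstraction (roughly a 28-day turnaround)
target_backlog = 20      # backlog level that represents a 14-day turnaround
weeks_to_goal = 10       # how long we give ourselves to close the gap

gap_volume = current_backlog - target_backlog        # 20 cases to work off
extra_per_week = gap_volume / weeks_to_goal          # cases above maintenance each week
weekly_target = weekly_discharges + extra_per_week   # total cases to abstract weekly

print(f"Gap volume: {gap_volume} cases")
print(f"Weekly target: {weekly_target:.0f} cases ({extra_per_week:.0f} above maintenance)")
# With weeks_to_goal = 4, the same gap would demand 5 extra cases per week,
# which wasn't realistic in the speaker's scenario.
```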
Let's move on and talk about some of our project outcomes. After implementing the strategies I've discussed in the prior slides, this is where we ended up. Our volumes have continued to grow, as you can see on the trend chart on the left. The yellow box highlights the years since we implemented the project. We've added an additional campus as well as some new registries. But despite that growth in volume for our registries, we've continued to improve our productivity over the years. We went from an average abstraction turnaround time of 51 days in 2016 to less than 30 days in 2017, and we've continued to reduce that value annually; we now maintain an average of seven days from patient discharge to abstraction. That's allowed us to continue to improve the age of data that we're reporting to our medical executive committee. We improved from that 146-day-old data at baseline in 2016 to 46-day-old data in 2017. The average age of data we now report is 28 days old, so we do have access to timely, actionable data for our clinicians.

As part of our productivity maintenance plan, we perform an annual analysis of each person's registry workload. When necessary, we rebalance the volume of cases assigned per person. We also provide ongoing productivity feedback for everybody. In each productivity report, we provide the average abstraction turnaround time by registry, the abstraction volume by person, and the overall backlog volume. By routinely analyzing these trends, we have a built-in early warning system for productivity that might be getting off track, and that allows us, again, to quickly course correct.

Another key process we've implemented is contingency planning. We hold weekly team huddles where this is a standing agenda item. When team members have time planned away from the office, the team can brainstorm ways to support them. It might be that one person is going to anchor cases for them and another person is going to abstract a set volume for them. It's a give-and-take system, and the team has developed trust. They know if they support someone else, they're going to receive the same in return. Contingency planning is also helpful for unplanned absences, which I'll review in the next slide.

This is a snapshot of the routine productivity report we distribute every two weeks. On the left, that chart represents the average abstraction turnaround time to that point in the fiscal year. In the middle, you'll see the total volume of cases in backlog status at the time the report was created. And on the right, you'll see how many cases were abstracted by each person during a specific quarter. So let's focus on the middle graph. As you can see, there was a higher level of backlog cases than usual at the beginning of our fiscal year. That was caused by a planned but extended absence by our registry site manager. To signify that data point, I have a little life preserver, and you can see it was significantly impacting our abstraction turnaround time by the arrow leading over to the graph on the left. Move all the way over to the graph on the right and you're going to see the action we took. In that first bar, the cases in yellow are the number of cases I abstracted into the registry to support the registry site manager during that time. And if you move back to the middle graph, you'll see how our backlog very quickly returned to normal. However, later in the fiscal year, the registry site manager transitioned out of our department back to a clinical role. At that time, we also implemented a contingency plan to cover the role while we were seeking and onboarding her replacement. Hopefully, this provides some insight on how critical it is to have a plan in place to quickly react to both planned and unplanned events.

Let's circle back to one of the baseline concerns we had with the accuracy of our data. In the ICD metric scenario, please recall, we were concerned with the accuracy of data abstracted by one of our newer team members. Look at the arrow in the middle of the graph. The only action we took to move from 63 percent compliance with the metric to 94 percent was implementing that quarterly outcomes validation and inter-rater reliability auditing plan. We started our validation plan with a very structured set of processes and tools. You can access that outline and all those tools on the NCDR participant resource sharing site. But we did find over the years that we needed to maintain flexibility and adaptability in these processes.
The plan that fit a newbie abstractor five years ago doesn't fit that same abstractor today. As we grow in our roles, our needs change, and for this reason, we do that annual review of learning needs as well. We then balance the competing priorities of both the primary auditor and the abstractor to set a goal for the year. These are processes, though, that we all look forward to, as the feedback we both give and receive is meaningful. We found that these conversations have helped standardize our approach within the same registries across three campuses.

In the interest of time for this session, I've only shared a small subset of the work that was made possible by improving our productivity and the confidence in our data. Each of our registry site managers is now working to their full potential. By being efficient with the time we spend abstracting cases, we now have more time to devote to process improvement projects, which is the most exciting part of our jobs. We're very proud that we're actively contributing to optimizing patient care at each of our campuses. Please don't hesitate to reach out with any questions. Our team really is fully committed to exchanging best practices and to networking. We definitely want to hear from you. Thank you for your time.
Video Summary
In this video, Christy Verschelden, a registered nurse and process improvement specialist at Baylor Scott & White, The Heart Hospital Plano, talks about using data to drive improvements in healthcare. She discusses her experience with the NCDR suite of registries and the importance of providing clinicians with timely and accurate data. She introduces the concept of data validation and explains how her department uses focused audits, metric audits, and registry adjudication to ensure data accuracy. She also discusses abstraction productivity and the importance of monitoring abstraction turnaround time. Verschelden shares the outcomes of these efforts, including improvements in data accuracy and timeliness, and highlights the use of contingency planning to maintain productivity during planned and unplanned absences. She concludes by emphasizing the importance of ongoing learning and improvement in the field of healthcare data analysis.
Keywords
data-driven improvements
data validation
abstraction productivity
contingency planning
ongoing learning
healthcare data analysis