Lesson 2: Understanding and Learning from Variation
Video Transcription
If you've been to one QI training session, you've checked the box, but clearly that would make this talk obsolete for many of you, which is not good, but also it's complicated. Improvement science is complicated, getting operations to change is complicated, and there's more to it than QI 101, so keep learning. The other comment I would make is that this idea of coming to a learning session is most powerful when it is live. One thing I've learned over time is that the reason this stuff works is because we're together and we share with each other and learn from each other, so the more people who can come to these live sessions and interact with each other, the better.

So when I was deciding what to do with the follow-up to QI 101, there was a pattern to this, and this talk for the next few minutes will be about variation, which is one of the key parts of Deming's theory of profound knowledge. I'll start with that again, but the bulk of the talk is really about variation: understanding the role of variation in our practice and our day-to-day life, giving some examples of variation, and then offering some tools to help us understand and learn from variation. And I'll use an example that's going on right now in our little part of the world in the Heart Institute at Cincinnati Children's to teach you about this.

As I talked about when we first introduced quality improvement and how you do quality improvement, we base a lot of what we do on Deming and his concept of the theory of profound knowledge. This, you might remember, is a magnifying glass that you look at your system through, and the different areas are all important in making changes to your system. The handle of the magnifying glass is the values that are inherent to the system that you work in, but the four important parts of this improvement method are, number one, the theory of knowledge, which is really what we talked about in QI 101 and which I'll review briefly; number two, understanding variation; number three, appreciation of the system that you work in; and number four, maybe the hardest part, psychology, dealing with the psychological makeup of the people that you work with in the system.

The theory of knowledge really revolves around some key tools, including the Model for Improvement, where we ask: What are we trying to accomplish? How will we know that a change is an improvement? What change can we make that will result in improvement? That's where we really start a quality improvement project. On the right is the key driver diagram for one of the outpatient metrics, which is another tool within the theory of knowledge framework. That's what we talked about in the first short lecture, the QI 101 lecture.

But today I'll talk about variation, and I think it's actually a very good point in this quality network's journey to talk about variation, because I think we're finally getting enough data among all of these centers on these first two projects to look at the data in a different way that we can learn from. There are a couple of important points when you talk about variation, especially variation in healthcare. There are two types of variation that exist in the things that we do every day.
There's what's called intended variation, and if we put that in the framework of healthcare, this is an important part of effective, patient-centered healthcare. It's also called purposeful, planned, guided, or considered variation. I always give an example with this; I usually use an ER example, but let's use a cardiology example. If you had a baby with Tetralogy of Fallot show up in your clinic, your decision about what to do with that baby would depend on the degree of Tetralogy of Fallot that you're working with. Someone with severe right ventricular outflow tract obstruction is very different from a pink Tet. So if you gave five different Tets of varying degrees to the same cardiologist and five different decisions were made, that would be intended variation. That would be guided and purposeful.

Unintended variation is due to changes introduced into healthcare that aren't purposeful, planned, or guided. Sometimes those are based on medical decisions that we make, but sometimes they're based on the system itself. So there can be variations in the medical record or in how you admit a patient from the ER to different units; they can come from equipment, supplies, the environment, the way we measure, or management practices. The corollary example here would be that if you gave a new baby with Tetralogy of Fallot with a certain RVOT gradient to five different cardiologists and they made five different plans, that would not be good variation. That would be unintended variation. And we know from the healthcare literature, but also from the manufacturing literature, that reducing unintended variation in a process usually results in improved outcomes at lower costs.

When you talk about variation, and when you start talking about standardization, healthcare people can get antsy because they don't want their autonomy to be taken away. We're not talking about taking any autonomy away; we're talking about reducing the unintended variation that is typically a wasteful part of the system. This is a problem that exists widely in healthcare, so here are a couple of examples. I don't know if anyone's been to the dartmouthatlas.org website. This is a project from Dartmouth that has used billing records to look at variation in care across much of the Northeast, and then across the United States as well, looking at areas to target for reducing this kind of variation. This example is an adult example, but it comes from Dartmouth, and then I'll drill down for you. This is the annual percentage of diabetic Medicare beneficiaries who receive hemoglobin A1C testing in different regions of the country. The lighter regions are in the lower range and the darker regions are in the higher range. So you can see that if you're a 67-year-old diabetic living in New Mexico, your chance of getting that proper healthcare test is much lower than if you live in Seattle, for example. This Dartmouth website is actually very interesting; go and play with it. It has some pediatric material on it too, and you can drill down. This is the same data in the particular region that I'm in, in Ohio, and then you can drill down to Cincinnati and look at the variation that exists among practices in Cincinnati as well. So there's a lot of variation, and nobody would argue that there's a good reason why one area of the country should be doing hemoglobin A1Cs less than another area of the country.
This kind of variation is part of the practices that we all participate in, and the same is true in pediatric care. This was a study that was published on quality of care in ambulatory general pediatric practices. It was done in 12 large metropolitan areas, where patients and families were interviewed and then charts were reviewed. Across the different domains of care, acute care, chronic care, and preventive well child care, there was lower than expected adherence to standard practices in the general pediatric population. And we see this in our field too. I didn't pull up a whole bunch of examples from pediatric cardiology, but there have been a number of examples of the variation that exists, especially in the large registry projects; whether it's IMPACT or PC4 or NPCQIC, these have all shown wide variation in our care. So variation does exist.

So how do we identify and understand variation? I have an example from one of the books that we use to teach quality improvement, the Healthcare Data Guide. This was a study of four different clinics that looked at the average wait time for patients in clinic and the correlation between wait time and patient satisfaction scores. It was done as a sort of classical data collection, presented in a table the way we typically look at descriptive statistics: 11 weeks in each clinic, with each clinic's average patient rating and average wait time. And what they found was that over the 11 weeks, the clinics looked very similar; the average wait time was 36 minutes and the average patient rating was 4.25 in all of the clinics. If you stopped here, you might surmise that these clinics were all very similar.

Well, one argument I will make to you today is that there is a lot of power in visualizing data down to the single data point, if possible. The more we do to combine data and not look at every individual data point, the more we lose. This is the scatterplot of those four different clinics and the correlation between wait time and average score each week, and you can see a very different pattern in each one of the clinics. For example, this clinic had a pretty linear pattern of longer wait times and lower patient scores, except for one week where the patient scores were really low despite a wait time that you would expect to put them much higher. You would expect this clinic to be here, except what you have to do is dig down to find out that on that day the parking lot was closed and everyone had to park across the street. And you can see a clinic like this where, as the wait time got longer, the scores got worse, except right here, where they started to go up. For the clinic on the top right, what you found when you dug in was that when they got to a certain waiting time, they started handing out popsicles or some treats to the kids in clinic, and then everyone got happy again.
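To make that point concrete, here is a minimal Python sketch (using pandas and matplotlib) of why identical summary statistics can hide very different patterns. The numbers are invented for illustration; they are not the Healthcare Data Guide's actual data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented weekly data for two hypothetical clinics.
# Clinic A: ratings fall as waits grow; Clinic B: steady waits and ratings.
df = pd.DataFrame({
    "clinic":   ["A"] * 6 + ["B"] * 6,
    "wait_min": [30, 33, 36, 39, 42, 36,
                 36, 36, 35, 37, 36, 36],
    "rating":   [4.6, 4.4, 4.25, 4.1, 3.9, 4.25,
                 4.25, 4.3, 4.2, 4.25, 4.2, 4.3],
})

# The aggregated view: both clinics print the same averages
# (36-minute wait, 4.25 rating), so they look identical.
print(df.groupby("clinic")[["wait_min", "rating"]].mean())

# The disaggregated view: one scatterplot per clinic reveals the difference.
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
for ax, (name, grp) in zip(axes, df.groupby("clinic")):
    ax.scatter(grp["wait_min"], grp["rating"])
    ax.set_title(f"Clinic {name}")
    ax.set_xlabel("Avg wait (min)")
axes[0].set_ylabel("Avg patient rating")
plt.show()
```

Both clinics report the same 36-minute, 4.25-rating averages; only the plots show that one of them degrades as waits lengthen.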
So there's more to the story, and if you don't dig down to what the data actually look like, you might lose some of it. How do we visualize variation? We use two things primarily: run charts and control charts. I'll show you the anatomy of both.

This is a run chart. The anatomy of the run chart is that you have your data, which are the blue dots, charted over time. It's important that we visualize data over time so we can understand what's happening as we make changes to the system. In a run chart, as in a control chart, there's always a center line, which is typically the median, but in some control charts is another measure of central tendency.

Control charts add statistical process control limits to the run chart. These were developed in industry by Walter Shewhart, so they're sometimes called Shewhart charts. They're statistical tools used to distinguish between variation in a measure that's due to what we call common cause and variation due to what's called special cause. If you look at the anatomy of the control chart, in addition to the center line we have upper and lower control limits, and those are typically three standard deviations above and below that line of central tendency. Again, some control charts are a little more complicated in how those are calculated, but that's generally what they are.

So in addition to unintended and intended variation, we have to talk about common cause and special cause variation. These are important to understand because they help us know when we should react to something. Common cause variations are variations that are inherent in a process over time; they affect everyone working in the process and all the outcomes of the process, and they're due to chance. We call a system stable when all of the data points are within the control limits. That's common cause. It doesn't mean the system is good; it just means that everything is within the control limits. Special causes are causes that are not part of the process all of the time; they don't affect everyone, but arise because of specific circumstances. If you go and look at special cause points, you can usually assign a cause to them. When there are a lot of points outside of the statistical process control limits, the system is said to be unstable and the process isn't in control.

So if you look at two charts: the chart on the left shows data points, a center line, and the control limits, and the points are all within the control limits. That is all common cause variation; nothing is different from anything else. As opposed to the chart on the right, where you start to see points above the upper control limit, which tells us a special cause has happened. And hopefully what's happened is that you've introduced something into the system that's led to those special causes; at least you can go back and look and typically find what's happened. And we have a number of rules for control charts that help us determine when a special cause has happened.
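Before getting to those rules, here is a rough Python sketch of the control-limit computation itself. The talk describes the limits as about three standard deviations around the center line; one common convention for a single measure over time is the XmR (individuals) chart, where sigma is estimated from the average moving range and 2.66 times that average corresponds to roughly three sigma. The data here are invented.

```python
import statistics

# Invented monthly measurements; the ninth point is a deliberate outlier.
data = [12, 14, 11, 13, 15, 12, 13, 14, 24, 13, 12, 14]

center = statistics.mean(data)

# Estimate sigma from the average moving range (XmR / individuals chart);
# 2.66 * MR-bar is roughly three standard deviations of the process.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = statistics.mean(moving_ranges)
ucl = center + 2.66 * mr_bar  # upper control limit
lcl = center - 2.66 * mr_bar  # lower control limit

print(f"center={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
for i, x in enumerate(data, start=1):
    marker = "  <-- outside the limits: special cause" if not lcl <= x <= ucl else ""
    print(f"month {i:2d}: {x}{marker}")
```

Real charting packages pick the chart type (p-chart, u-chart, XmR, and so on) based on the kind of data, which is the "a little more complicated" part alluded to above.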
One of those rules is a point falling outside the upper or lower control limits, but there are some other rules as well: six consecutive points going in the same direction, up or down; a lot of points in the upper and lower thirds around the median center line; points sitting very close to the outer control limits; or seven or eight consecutive points below or above the median line. These are all rules for special cause as you're looking at your system, and that's how you'll know whether something has changed in your system versus whether it's just the same as it always has been.

So that's how we identify variation. But the next step, and really the last step, is what we do with the information once we've started to look at the variation. I like this slide because I think it shows what we typically do: these guys are looking at a graph saying, I don't have any idea what it means, but I love the action. And even more than that, we often get charts shown to us in healthcare, and this happens all the time, where people will say, oh, this is really, really bad, we need to make a change to the system, when in reality it's not any different than it was a month ago. So we introduce changes, and it's all very confusing because changes are being made all the time. One of the things you can do as you learn this is ask the next question: can we look at that a different way, so we can see whether there really is something going on?

So how do we respond? If you walk through this control chart, here's how you might evaluate what's happening in different parts of the chart. If you look at this highlighted area, the current state of the system is that it's unstable: there's a point outside of the control limits, so a special cause is present. And we want to go down; we can see from the chart that good is down, and the process average is also too high. So what do we do? Our action here is to identify and eliminate bad special causes and identify and build in good special causes, which may move us to here. Now the current state is that the process is stable; there's common cause variation, but the process average is too high, and we think the variation is too high as well. Well, we can work on common causes to reduce variation, which might get us to here. You can see that the median hasn't changed, but now the statistical process control limits are narrower, which is great because it allows us to be better at predicting what's going to happen next. But the process is still too high, so we take an action to move the entire process down, and you can see that happen in the next step there.

There are a number of organizations in our field, including the Quality Network now, I think, that are using these principles to try to improve processes and outcomes. Long ago, the Northern New England Cardiovascular Disease Study Group really pioneered this idea of looking at variation among centers, trying to identify best practices or best outcomes and then learning from each other. The Pediatric Cardiac Critical Care Consortium is doing this, NPCQIC is doing this, and the Society of Thoracic Surgeons is at least providing us the data to start to think about working together as centers. So before I tell the story, does anyone have any questions about variation? I'll tell you a story from Cincinnati Children's Hospital, where we shouldn't have been acting like we did with regards to our data, but we do. So, any questions?
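Here is a small Python sketch of how a few of those rules can be checked programmatically. The exact counts vary between references (some texts use five points for a trend or eight for a shift); the thresholds below follow the ones mentioned in the talk, and the limits and median are assumed to be precomputed.

```python
def special_cause_signals(points, median, ucl, lcl):
    """Return (index, rule) pairs for a few common special-cause rules."""
    signals = []

    # Rule 1: any single point outside the control limits.
    for i, x in enumerate(points):
        if x > ucl or x < lcl:
            signals.append((i, "point outside control limits"))

    # Rule 2: six consecutive points all going up or all going down (a trend).
    for i in range(len(points) - 5):
        window = points[i:i + 6]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            signals.append((i, "trend: six points in one direction"))

    # Rule 3: eight consecutive points on one side of the median (a shift).
    for i in range(len(points) - 7):
        window = points[i:i + 8]
        if all(x > median for x in window) or all(x < median for x in window):
            signals.append((i, "shift: eight points on one side of the median"))

    return signals

# Example with invented data: the process settles onto a higher level at point 6.
series = [12, 13, 12, 14, 13, 12, 16, 17, 16, 18, 17, 16, 17, 18]
print(special_cause_signals(series, median=13.0, ucl=20.0, lcl=6.0))
```

Run against this invented series, only the shift rule fires, at the point where the process moves to its new, higher level; no single point breaches the limits, which is exactly why the run-based rules matter.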
I think this is pretty important right now, because as Kathy showed you, I think we have enough data. Clara, you said we have nine quarters of data from the two groups, so we should be able to start looking at the data over time to make sure that we're actually improving these processes, as a step toward asking whether we're improving the outcomes we're trying to affect. Yeah, Kathy?

The variations in that data, though: my group may be introducing different things into the PDSA cycles than Matt's group, than Boston's, than another group. What are those common causes or special causes? If my intervention for the BMI counseling was first to educate the doctors and then to create handouts or whatever, and that's why I get better and better at it, but you did it the other way around, how are we going to look at that?

Well, I think you'd need to look at each center, and each center needs its own control chart to see who's actually improving and who's not. The direction of each control chart. Exactly. And we would learn from that: if your center has had special cause now and you're outside the control limits, then we would want to learn from you, for sure, about what you're doing. Now, it doesn't mean that what you're doing is going to work at Boston, but it does mean we should know very specifically what you're doing.

But what we talked about in November with the BMI project, and I don't remember whether we also did TED at the same time, is that people showed their run charts and how they would get improvement, and then they would fall off the scale and nobody would report BMI counseling at all. So we already started to do that, but just anecdotally.

Yeah, I think we should be taking the data now, nine quarters in, and really systematically looking at all the centers involved. And it's not as simple as it sounds; some centers started later and not everyone has been involved for nine quarters, but there are ways we can look at this.

So my story is about patient and family experience. We're very interested in patient and family experience at Cincinnati Children's Hospital, and we started collecting outpatient scores on all of our doctors. A sample of families from every doctor's clinic every month is given a fairly simple tool, just a few questions, and we're starting to track this. An important metric that we've decided to track comes from asking families, how would you rate your provider, one to 10, where 10 is good and one is bad? And we track, for each doctor, the percentage of ratings that are nines and tens. So if 10 people rate me in the month of January, and nine of them give me a nine or a 10, and one person gives me an eight, then my score that month is 90%. Does that make sense?

So this happened for six months, and we get an email from our division director that said, congratulations to these four doctors, they got 100% every month for the last six months on their patient and family experience scores. Which is fine; I'm happy celebrating people. The problem was my next door neighbor, next to me, was not getting 100% on his scores, and some other people were really upset that their scores were in the 85% range or whatever, and so they started calling all their nurses together and having clinic meetings to figure out what was going on. And I kept saying, well, guys, just hold on; can we actually look at this?
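The scoring rule described above is simple enough to state in a couple of lines of Python; this is just the arithmetic from the talk, with invented ratings:

```python
# One doctor's invented ratings for a month: nine 9s/10s and one 8.
ratings = [10, 9, 9, 10, 9, 10, 9, 8, 10, 9]

# Monthly score = percent of ratings that are a nine or a ten.
score = 100 * sum(r >= 9 for r in ratings) / len(ratings)
print(f"{score:.0f}%")  # prints 90%
```

This kind of percent-of-best-ratings measure is often called a "top box" score in patient experience work.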
So it took a while for the hospital to give us the data to start looking at it, but I'm going to show you what we found. And I get mine every month too, and my mood goes up and down just like everyone else's, depending on my score.

This is all of our cardiologists who are doing outpatient clinics, from month to month, from July 2016 to September 2017, when I went on my rampage here. This is the center line here, so the cardiology division is at about 87, 88%, and we're in common cause variation, where every month we're the same. Now, this doesn't show you what's happening with individual cardiologists, but as a division we're not getting better and we're not getting worse. We're stuck at 88% or whatever, which actually isn't too bad in the hospital, but it's not like we're getting better or getting worse.

So then I said, well, what about the individual cardiologists over that period of time? Because my friend next door is really upset and thinks he's really bad, and all these other guys have gone to beer night together because they're all 100% and they're awesome. So tell me what is actually going on. Well, this is everybody plotted across those 13, 14 months, with each point being a cardiologist's average score for that entire period. This is a special kind of control chart called a funnel plot, which we don't have to go into right now, but the same principles apply. There's an upper control limit, there's a lower control limit, there's a center line, which is about the same, 87, 88%, and everybody except one cardiologist is within the control limits. So there is one special cause, and I will tell you it's not my next door neighbor; it's someone else. But everyone else is the same. Even the guys who got 100% for six months in a row are the same as the guy who is sitting at 87%, because when you look at the variation that exists and put statistical process control around it, everybody's the same.

So then I said, well, maybe it's early career, mid career, late career; let's look at those groups. I combined all the early career folks, the mid career, and the late career, and it turned out not to be the case that the early career got worse scores than the late career, or vice versa; everybody was the same. Then I looked at the types of cardiologists: general cardiologists, preventive, heart failure, EP, adult congenital, and interventional. And except for one month for adult congenital, everybody really was the same. Another way to look at this is by year, and that also showed that all of the practices were the same.

So I showed this to our cardiologists, and it didn't help at all. The people whose scores were lower were still having special meetings and trying to fix things. So even though we have the data, sometimes we don't do with it what we should.

That's the end of the discussion on variation. I hope when we come back in May for our next webinar, we'll be able to show you some of our data in this way, so that we can start understanding what's going on. And my last slide is just a plug: keep working on Quality Network stuff, but also go back to your centers, because I'm sure you're connected to some of the other quality work that's going on at your centers.
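For readers curious about the funnel plot mechanics, here is a hedged Python sketch using the usual binomial form for proportion limits: the overall rate plus or minus three standard errors, sqrt(p(1-p)/n), so a provider surveyed fewer times gets wider limits. The provider names and counts are invented, and real funnel plots often add adjustments, for example for over-dispersion.

```python
import math

p_bar = 0.875  # overall division top-box rate, roughly the 87-88% in the talk

# Invented providers: (number of 9/10 ratings, number of surveys returned).
providers = {"Dr A": (52, 60), "Dr B": (30, 30), "Dr C": (88, 120)}

for name, (top_box, n) in providers.items():
    se = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error at this n
    ucl = min(1.0, p_bar + 3 * se)            # limits widen as n shrinks
    lcl = max(0.0, p_bar - 3 * se)
    p = top_box / n
    verdict = "special cause" if not lcl <= p <= ucl else "common cause"
    print(f"{name}: {p:.0%} with n={n}, limits ({lcl:.0%}, {ucl:.0%}) -> {verdict}")
```

In this invented example, the provider with 100% on only 30 surveys still lands inside the limits, which is the talk's point about the 100% doctors, while a lower score over many more surveys can flag as special cause. Because the limit width scales with 1/sqrt(n), splitting people into smaller subgroups widens the limits, which is exactly the issue raised in the Q&A below.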
For the AAP National Conference and Exhibition in the fall, in our section on cardiology and cardiac surgery, we're going to have a session on quality improvement that's going to focus on where we are in the field. But also, more importantly, we're going to have an abstract session for trainees focused on improvement projects, which will give MOC Part 4 credit for the abstracts that are presented, hopefully. And then we're going to have a fellows' workshop on improvement that's going to give them the credit they need for the ACGME projects that all of them have to do. The abstracts are due on April 13th for this session. So thank you, and it's probably time to move to the next session. Gerard?

When you did early and late career... Yeah. Your one cardiologist... You want to know my cutoff? Your one cardiologist that was an outlier, the one that is struggling; he fell within the confidence limits. Is that because of the way that you broke it up, your confidence limits just got wider, so he fell into the... He or she fell into the... Right, exactly. In this type of chart, the confidence limits are determined by the N. So then for each group, the N is smaller, and therefore the confidence limits are bigger. That's right. And that's why it's kind of a gratuitous plug: the more centers we get, the tighter our confidence limits can get, so that we can distinguish good versus best performers versus those who need more improvement. Yep. That's right.

I'll just say another sidebar benefit of understanding what's outside the limits is being able to speak to issues at home that come up. I used to believe that my entire job was to use every drop of statistics or quality improvement science I knew to prevent our CEOs from freaking out about the wrong thing month to month and taking the whole institution down a rabbit hole over noise. So that will happen, and that's why this is so important. Yeah. I mean, I think we're trying to train enough people so that there are people everywhere who can ask those right questions. And I kept saying, after I came back, you've not given us any confidence limits on this data, so I don't know whether that month was a bad month. Did they change? Do we now have confidence limits? There had to be a lecture at the leadership table.

Are you talking about my team at Cincinnati? Cincinnati, first of all, I will say, is a unique place, because while we may not actually believe what we have or do it the right way, we have the infrastructure in place. For this particular thing, this was all done on my computer, because I couldn't get anyone else to do it for me. But we have an infrastructure; we have an analytics team in the hospital that understands these kinds of charts. And a lot of it is built into what people know, too, and they'll build it on their own. If you're working at an institution with a quality department or something, it can sometimes feel like you can't reach out to them, but you're always welcome to reach out to them for more expertise or tools.

You also just said there's a new software package? Yeah, if you wanted to look into something, I've actually been impressed with this so far, although we're just starting to use it. There's a company from the UK called LifeQI, who has a...
Video Summary
The talk emphasizes the complexity of quality improvement (QI) in healthcare, underscoring the importance of continual learning beyond basic QI training. Live learning sessions are deemed vital for shared learning among participants. The speaker stresses that change in operations is challenging and improvement science is intricate, going beyond elementary QI concepts. The focus shifts to understanding variation through Dr. Deming's theory of profound knowledge, highlighting variation in practice and using healthcare examples.

Two types of variation are discussed: intended (purposeful and planned, such as varying treatment based on patient needs) and unintended (arising from inconsistencies in system processes rather than deliberate choices). The importance of visualizing and understanding data over time through run and control charts is emphasized as a way to identify variation in processes. The talk highlights the concept of common cause variation (inherent in the process) versus special cause variation (arising from specific, assignable factors) and the importance of distinguishing between them to enable effective process improvement.

Examples from patient experience surveys at Cincinnati Children's Hospital illustrate how data visualization can help identify patterns and focus improvements. The speaker encourages participants to engage in QI initiatives and highlights opportunities for sharing and learning at upcoming conferences.
Keywords
quality improvement
healthcare
variation
Deming's theory
data visualization
process improvement
Cincinnati Children's Hospital