IMPACT Data Can Unleash Program Excellence - Sutton
Video Transcription
Hi, my name is Nicole Sutton. I'm a pediatric cardiologist at the Children's Hospital at Montefiore in the Bronx, New York, and I'm going to be talking to you today about how we can use IMPACT data to unleash program excellence. The objective for today's talk is to learn how we can use our IMPACT data to improve our congenital heart disease programs. We're going to use an example of how another cardiac registry uses its data for quality improvement, and we're going to talk about next steps for IMPACT.

A little bit of background: IMPACT stands for the Improving Pediatric and Adult Congenital Treatment Registry. The purpose of the IMPACT Registry is to improve patient care by promoting quality improvement at the participating institutions. At the ACC, in the Adult Congenital and Pediatric Cardiology Council breakout session on quality improvement, the need for a national registry to better understand clinical practice and patient outcomes came up for discussion. From that discussion a working group was formed; they wrote a paper outlining the concept and feasibility, and it was approved by the council in 2007. By 2010, pilot centers had started to collect data, and in 2011 they began entering data into the registry. That was version one. The number of centers grew quickly, and version two launched in 2016. Currently, there are 106 centers and over 246,000 procedures in this database, and version three was under discussion just before COVID started causing problems for everyone.

This is a map of all the sites that participate in the IMPACT Registry. You can see it covers the entire United States and Canada, and at the very bottom you can see one site in Australia.

So what are the benefits to your site of being involved in the IMPACT Registry? You get to be involved in the ACC Quality Improvement for Institutions program and have access to all of its offerings. You can be involved in quality improvement projects that are specific to the IMPACT Registry. You can do research on the data in the registry: thus far there have been 16 published papers, and as you can see, there are three more papers in preparation, five in analysis, and nine in feasibility review right now. Another big benefit to your center involves U.S. News & World Report: being a participant in the IMPACT Registry earns you a specific point in the U.S. News & World Report section on quality improvement.

I'm going to use the example of another registry that I am very involved with at my hospital, the National Pediatric Cardiology Quality Improvement Collaborative (NPCQIC), and how it used its data and quality improvement methods to make significant changes in the outcomes it was tracking; I hope we can do something similar with our data. The Joint Council on Congenital Heart Disease and Quality Improvement was formed in 2003 as a leadership alliance to enhance communication and improve coordination among the various societies representing cardiologists, congenital heart surgeons, and ACHD specialists. One of its main goals was to develop a QI project that would allow pediatric cardiologists to satisfy the ABP requirement for recertification through a QI project on the care of a specific subset of children with heart disease. In 2006, a set of guiding principles was adopted that outlined these goals in detail.
I'm not going to read all of this to you, but basically they wanted to focus on children with cardiovascular disease through multiple quality improvement projects addressing the spectrum of pediatric cardiovascular inpatient and outpatient care. They specifically wanted to look at diagnosis, treatment, recovery, discharge from the hospital, and that critical period of follow-up after you leave the hospital, the handoff. They wanted to set up a national, multi-institutional database to support these QI projects, and they wanted to involve parents, families, and patients. They also wanted to include other subspecialties, so it wouldn't just be pediatric cardiology but also cardiothoracic surgery, critical care, anesthesia, nursing, social work, and child life. Since then, it has grown to involve even more people than that.

So they came up with this mission: to dramatically improve the outcome of care for children with congenital heart disease through a national quality improvement network of providers collecting longitudinal data and conducting QI research intended to accelerate the development and translation of new knowledge into practice. It's a very big mission.

This is the timeline, and you can see this started in the fall of 2005. They set up the charters and the quality improvement task force, they had meetings, they designed it, and by 2008 they were testing some pilot measures. By 2009 they had their first centers and were holding what they call their first learning session, which is kind of like this: a summit of all the centers getting together and talking about what was working and what wasn't.

They decided to focus on a population that was clearly in need of improvement, and they chose hypoplastic left heart syndrome, because it still has one of the highest risks of morbidity and mortality, the highest among all of our patients in pediatric cardiology and surgery. What they focused on was patients who had survived their Norwood operation but were still at risk of dying during what's called the interstage period: the period between your stage one surgery and when you come back for your stage two. During this period, mortality had been 10 to 15%, which is quite high, and even the patients who made it through had significant morbidities. They had poor feeding. They had chronic cyanosis. There was a very high incidence of recurrent laryngeal nerve or phrenic nerve injuries. They also had delayed growth and development, so that when they showed up for their second surgery at about six months of age, they were usually quite small and had fallen off their growth curve compared to where they were at birth. And they required numerous unscheduled visits and readmissions during that period.

Phase one of the project focused on improving this interstage mortality, and what they were able to do (I'll show you how) was decrease interstage mortality from 9.5% to 5.1%. That's a relative reduction of 46%, and it was all through QI processes. They also reduced the aggregate growth failure center line, and I'll show you what these charts look like in a little bit, from 18% to 13%, which is a 28% reduction in growth failure. These are significant gains for our patients.
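To make the arithmetic behind those relative reductions explicit, here is a minimal check in Python; the percentages are the ones quoted above.

```python
# Relative reduction = (baseline - new) / baseline
def relative_reduction(baseline: float, new: float) -> float:
    return (baseline - new) / baseline

# Interstage mortality: 9.5% down to 5.1%
print(f"{relative_reduction(0.095, 0.051):.0%}")  # -> 46%

# Growth failure center line: 18% down to 13%
print(f"{relative_reduction(0.18, 0.13):.0%}")  # -> 28%
```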
They have a public website (I can't show you any of the private data, only the public data), and on it they have this beautiful graphic that shows their goals, what they've achieved, and who participates. You can see they have 67 care centers in multiple countries: the United States, a few centers in Canada, and I believe there's one in England now. And you can see there are over 1,400 patients in the registry. This is a patient-based registry, as opposed to IMPACT, which is a procedure-based registry. You can also see that they have published 37 papers, that they report transparently, and that they now have a new goal. The original phase one goal was to decrease mortality in the interstage. Phase two is a little bit different, and I'll get to that, but they've changed their goals: now they're talking about who survives to their first birthday, and who is eating birthday cake on their first birthday.

The other thing they did is come up with what they call the interstage change package: a package of tools your hospital can use to improve interstage mortality, based on the interventions from phase one of the project. This included things like weekly visits with your cardiologist, or, if you couldn't do weekly visits, weekly phone-in weight checks. Some centers were very far away from their patients, so those patients would do tele-visits every week, report the saturations and weight, and log it that way. In centers like mine, where patients live close to the hospital, families felt more comfortable coming in and seeing us. But there's a whole list of things that were done in the centers with the best outcomes; other centers started doing them, and everyone's outcomes got better.

Phase two launched in 2016, and the scope was increased to run from the time of diagnosis to the first birthday. This infographic shows how the project now plays out: from the time you're diagnosed, which could be in fetal life or after you're born, through your first surgery, through all of your interstage care, through your second surgery, the bidirectional Glenn, all the way to your first birthday. The goal has broadened too: to improve survival and optimize the quality of life for infants with a single ventricle requiring a Norwood, and for their families, between diagnosis and the first birthday. A bigger, loftier goal.

The way they do it is with a structure of learning labs. There's a research committee, a quality improvement steering committee, and then the learning labs, which focus on different parts of the data we're collecting. The fetal lab looks at how many patients get fetal counseling and how complete it is. Surgical and ICU looks at pre-op and post-op care. Interstage still focuses on the period between the two surgeries. Neurodevelopment, new in phase two, focuses on making sure patients get their neurodevelopmental screenings and, if they need therapies, that they receive them. The transparency group works on getting people to report these things transparently within the collaborative. And nutrition and growth works on optimizing nutrition and growth at all the different phases.
What we did is something called learning through variation. Each lab would pick a specific measure to focus on for the year; these measures can be process or outcome measures, and they would be reported at the meeting.

So let's talk about variation. Since this is a mixed group, some people will have more or less experience with these quality improvement terms, so we're going to go through everything. There are two types of variation. Common cause variation comes from causes that are inherent in the system over time; they affect everyone within your system and all the outcomes of the system. Special cause variation, by contrast, comes from causes that are not part of the system all the time or do not affect everyone, but arise because of specific circumstances or events at one specific location. Special cause variation can be due to many different things: the case mix of patients (different groups or sites may have a different case mix), changes in personnel, changes in staffing levels, unusual volume (a very busy month or a very quiet month), different equipment between centers, equipment malfunctions, different supplies, different processes, different sampling or measurement methods, or the use of a different operational definition of a measure, so that what everyone is reporting back is not the same thing.

Now we're going to talk about control charts. Those of you who are not new to QI will know control charts, but I'm still going to explain. A control chart shows an upper control limit and a lower control limit; think of these like the 5th and 95th percentiles people are used to from statistics. Any variation between the upper and lower control limits is considered common cause variation, meaning it's within the range set by those limits. You'll also see an average. Something that goes above or below the control limits is outside that control region and would be special cause; you can see that marked here with the green arrow. Any variation in the blue area is common cause, as you can see with the blue arrow: it can go up and down each month, but it's within the error bars. What you see in the middle is the process average.

Over time, you can follow the control chart to see whether your changes are working, and that's what we did in the NPCQIC. Our baseline data was interstage mortality. We formed our teams, and then we started making changes; that's what you see in this middle area. This is your PDSA cycle: you pick a new change, you test it out, and you see what happens. If there's a shift in the center line, you have a change in your outcomes, which is what you see here: we made a change, and the new center line has a new mean. That's what we mean when we say we moved the mortality center line from almost 10% to 5%. In that case, we made the center line come down. And for growth failure, we made the center line come down as well; fewer patients had growth failure. What we saw change was the center line of all of the centers over time, and that's how you know your changes are working. Now, when we see a stable process, all the variation is common cause variation.
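For those who want to see the mechanics, here is a minimal sketch in Python of the control chart logic just described. It builds a simple p-chart for a monthly event proportion. The three-sigma limits (a common convention for control limits) and the made-up monthly counts are illustrative assumptions, not NPCQIC or IMPACT output.

```python
import math

# Illustrative monthly data: (events, cases) per month. Made-up numbers,
# standing in for something like interstage deaths per active patients.
monthly = [(4, 40), (3, 38), (5, 42), (2, 35), (6, 44), (3, 41),
           (1, 39), (4, 43), (2, 37), (3, 40), (7, 36), (2, 38)]

total_events = sum(e for e, _ in monthly)
total_cases = sum(n for _, n in monthly)
p_bar = total_events / total_cases  # process average: the center line

for month, (e, n) in enumerate(monthly, start=1):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # binomial SD for this month's volume
    ucl = min(1.0, p_bar + 3 * sigma)           # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)           # lower control limit
    p = e / n
    label = "special cause?" if not (lcl <= p <= ucl) else "common cause"
    print(f"month {month:2d}: p={p:.3f}  limits=({lcl:.3f}, {ucl:.3f})  {label}")
```

A point outside the limits flags possible special cause variation worth investigating; points inside the limits are the ordinary month-to-month noise of a stable process.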
So you're studying your system in place, and you test a new idea. For example: we're going to schedule the first outpatient follow-up visit with the pediatrician one week after discharge for everyone. We'll make the appointment for them rather than letting patients schedule their own, and we'll see if that works out better. Now 100% of patients have an appointment scheduled, because we did it for them, and 95% of patients attend that visit; versus if we didn't make the appointment for them, only 80% remember to make it, because they were overwhelmed going home with a new baby. That would be a way of working on common cause variation.

But sometimes you see special cause variation, and you want to learn why this one is different. I say center because in both projects the data is center-based. If a center made a specific change and its results are better, you might want to see what it did so you can incorporate it into your practice. Likewise, if a center has a negative change, you want to know what happened: how do we bring them back to the center line and prevent a recurrence in the future?

One way to look at this is with what's called a funnel plot, which comes up a lot in QI. It's the same data; you're just changing the way you look at it. In this case, we looked at several different outcomes this way, and we asked why certain centers were outside the control limits. Looked at this way, it's a little easier to see by center: this is no longer over time, this is our data right now, and what we're seeing is the variation between centers. So imagine that what we looked at (this is a generic example) was an outcome such as how many patients were being fed by a specific day or by a specific point in life. One of the goals was to have everyone eating all their meals by mouth by age one; if that was our goal, we would take this data and ask how many people were reaching it. And if you look, Ward A, or Site A, is above the line. Now, depending on what you're measuring, being above or below could be better or worse, depending on whether the variation you want is positive; that depends on what you set up. If we set this up so that the measure is medication errors, then the way this chart is drawn, Ward A is doing worse than Ward B, because Ward B is below the average: Ward B is having fewer medication errors and Ward A is having more. What you do is go look at those two centers and see what they're doing differently from everyone else, to see if we can make things better.

We did this with several of the measures in the different learning labs so that we could focus on them. What's really important about this process, when you see where you sit on the chart, is that if you're Ward A or Ward B, better or worse than what everyone else is doing, you have to figure out why. Some centers discovered their numbers were off the mean because they had misinterpreted what a data element meant, and had selected yes instead of no or entered the wrong data; all they had to do was clean up their data, and they moved to the center line. Other groups realized that the reason they appeared to be doing poorly was that they had misunderstood the goal. For example, say the goal is for everyone to have a follow-up visit within seven days of discharge, but you thought the goal was 14 days or 10 days. You made all of your appointments too far out, so you look like you failed the measure. Then when you review it, you say, oh, we were supposed to do it within seven days; you start scheduling your appointments within that window, and suddenly you move to the center line.
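Here is a minimal sketch of how the funnel plot view described above could be built, assuming each center reports an event count and a case count. The matplotlib rendering and the made-up center numbers are assumptions for illustration, not the collaborative's actual tooling.

```python
import math
import matplotlib.pyplot as plt

# Illustrative per-center data: center -> (events, cases). Made-up numbers.
centers = {"A": (30, 120), "B": (5, 110), "C": (18, 150),
           "D": (22, 200), "E": (9, 60), "F": (40, 310)}

total_events = sum(e for e, _ in centers.values())
total_cases = sum(n for _, n in centers.values())
p_bar = total_events / total_cases  # overall average across all centers

# Three-sigma control limits narrow as a center's volume grows,
# which is what gives the plot its funnel shape.
volumes = list(range(20, 351, 5))
ucl = [p_bar + 3 * math.sqrt(p_bar * (1 - p_bar) / n) for n in volumes]
lcl = [max(0.0, p_bar - 3 * math.sqrt(p_bar * (1 - p_bar) / n)) for n in volumes]

plt.plot(volumes, ucl, "k--", label="control limits")
plt.plot(volumes, lcl, "k--")
plt.axhline(p_bar, color="gray", label="average")
for name, (e, n) in centers.items():
    plt.scatter(n, e / n)          # one point per center
    plt.annotate(name, (n, e / n))
plt.xlabel("cases per center")
plt.ylabel("event proportion")
plt.legend()
plt.show()
```

Centers falling outside the funnel are the ones to visit, either to learn from or to help, exactly as described in the talk.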
Now, the other thing this brought up is: are those goals the goals that you want? There was a lot of discussion. Did you want the goal to be X number of days? Did you want the goal to be something else? The most important part of that re-examination of the data and the definitions is that everyone then has to re-evaluate how important those specific measures are: whether they're important enough that you're going to change your process, or whether you think the goal is wrong and you might want to change the way it's defined.

Another thing that might come up is that there's an actual problem, or something you could do better, at your site. Some sites have nutritionists who can meet with the patients at every visit, go over the feeding, and make sure the feeds are properly optimized. Other sites don't have nutritionists. Maybe one of the sites without a nutritionist sees that it's not doing as well, and it uses its data to go to its higher-ups, its administrators, and say: these other sites are doing better than us. They have less growth failure. Their patients have better outcomes after their surgery because they're gaining weight. We need access to a nutritionist X hours per week, and we will be able to get better outcomes too. That's another way of using your data to your benefit, because you want your patients to do well; and there is data in this project showing that the better your weight going into your stage two surgery, the better your outcome from that surgery.

If you're one of the centers that's doing really well, you can become a mentor for another center. I feel that this process of sharing what everyone was doing well and not doing well was very good for all the centers. We did this with a chart like this that had all the centers numbered. You could actually see which center was which, because it's a transparent program within the collaborative, and you could go to another center and say: hey, your center is the same size as mine, you do the same number of cases I do each year, and your outcomes are better. How are you doing that? Because if you're a small center doing 100 cases a year, going to a place doing 500 a year may not be as helpful to you; whereas if you're the place doing 500, you might look at some of the things the 100-case center is doing and think, that's a good way to save some time and energy. You learn from each other in new ways. So that's how we would use a funnel plot, and we looked at all of our measures in these different ways to see different sides of the data.

Now, this is how the IMPACT report looks. This is what comes at the top of your IMPACT report, and it shows you percentiles: the 90th, the 10th, and where the median is, with an arrow showing where your hospital falls. It's nice because you see where you fall in relation to all the other hospitals.
This is actually one of my reports, and I chose these measures on purpose, because you can see again that some of them are process metrics and some are outcome metrics. The reason I chose these particular ones is not just that my hospital did well on them, but that there is so little variation in them that they are not very helpful. If you look at the numbers: my hospital had zero patients left without a device post-procedure, and zero was reported by everyone in the country. I think that has to do with how that particular measure is reported: you don't report an ASD closure if you didn't close the ASD, so it's always going to say zero, because if you tried and didn't place the device, the form doesn't ask you to fill in that field. Similarly here, if you ask how many patients had no shunt at the end, 100% of patients had no shunt, and that's because of how we do the procedure: you're not supposed to leave the procedure with any shunt. So that might be why that measure shows no variation. And the last one, device embolization, had either no data or so little variation that there couldn't even be a distribution. So maybe we need to rethink some of the measures we're collecting.

The radiation data has a lot more variability. I didn't choose the radiation outcome data, meaning what my radiation numbers were; I chose the measure that says the radiation measurements were recorded. My hospital has a very high rate of recording all the numbers. That's because we take radiation precautions very seriously, and the way our data transfers pulls all of it out of our report automatically, so I don't have to enter it manually.

We also get the data in another form, an Excel spreadsheet that looks like this, and it is really hard to read. This is our Excel spreadsheet. You can see it gives you several quarters' worth of data, but the format it arrives in may or may not be the most useful way of looking at it. If you look all the way at the end, though, you see that it has all my patients: it shows my hospital's rolling four-quarter average, and next to that, all the patients in the registry. So it actually gives you a lot of data, but you have to play with it to make something from it.

So where do we go from here with this IMPACT data? Because we have a lot of data. How do we leverage our IMPACT data the way the NPCQIC did, to improve outcomes? How do we show participants in the IMPACT Registry, and their leadership at their hospitals, the benefits? Each of these registries costs money, and hospitals want to save money, so they want to see that what we're doing with these registries benefits the hospital and our patients. Can we change the way we output the data to see trends, the way we do with the NPCQIC? Because while this way of looking at it is helpful, it might be nice to see whether, over the last four or five years I've been in this registry, things have changed: have my outcomes gotten better or worse during the years I've been participating? Can we identify special causes: are there sites doing better whose practices we should copy, or sites underperforming where we should figure out why? And how do we use the data we're collecting to run new QI projects across all of our sites? So I'm going to stop there and leave some time for questions.
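On the question of seeing trends over time, one approach is to reshape that quarterly spreadsheet into a run chart. The sketch below assumes a CSV export with hypothetical column names (quarter, my_hospital, all_registry); the real IMPACT export is formatted differently, so the column handling would need to be adapted.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per quarter, one column per comparison group.
# Column names here are assumptions; the real IMPACT spreadsheet differs.
df = pd.read_csv("impact_quarterly_export.csv")

# Parse quarter labels like "2018Q3" into timestamps and sort chronologically.
df["quarter"] = pd.PeriodIndex(df["quarter"], freq="Q").to_timestamp()
df = df.sort_values("quarter")

# Rolling four-quarter average, mirroring the figure reported in the spreadsheet.
df["my_hospital_r4q"] = df["my_hospital"].rolling(4).mean()

plt.plot(df["quarter"], df["my_hospital"], marker="o", label="my hospital")
plt.plot(df["quarter"], df["my_hospital_r4q"], label="rolling 4-quarter avg")
plt.plot(df["quarter"], df["all_registry"], linestyle="--", label="all registry")
plt.ylabel("metric value")
plt.legend()
plt.show()
```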
Thank you, Dr. Sutton. That was a fabulous presentation, and I thank you for all your work and for sharing what the other initiative is doing. I want to ask a few questions specific to the IMPACT Registry. But before we get into that, we have a question that came in asking a little more about the phenomenal drop, I think it was from close to 10% to 5%, in the mortality rate with your initiative. That seems remarkable and quite impressive. Can you expand on that a little before we start talking about IMPACT?

Sure. That change happened over multiple years and was a stepwise progression. When I showed the example of the control chart, the one I showed was going up, but we have one that was several years in the making, and it slowly went down. Basically, it was a lot of small changes made using those PDSA cycles you learn about in quality improvement. They were changes like seeing the patients extremely frequently: in my center, we see the patients weekly. The parents check weights every day at home, and we give them a list of what are called red flags: if any of these things happen, call immediately. We also give them, and we still do, a little script of what they're supposed to say, so the parent calls and says, my baby is a single ventricle, and I'm calling because of a red flag. When they call in the middle of the night with that, it triggers a specific response in each hospital. Depending on your hospital, it might be come into the ER, or come in to us; different hospitals have different protocols for what that call triggers. We also check the nutrition very differently now. We make sure the calories are optimized; if weight gain falls off, they might get admitted so we can figure out why they're not gaining weight. If there's any change, even a small change on echocardiogram, they might get a cardiac catheterization to make sure there's been no change in their anatomy, that they don't have a coarctation. Those changes mean the patients have a lot more interaction with the medical system, but they're doing much better at the end. Other things we've looked into include the use of different medications: one of the papers that came out of our registry showed that the use of digoxin seems to be protective. We're not sure why, but more of the patients on digoxin did better than patients who were not on digoxin. So that's how we did it, and when that got better, we started looking at other changes we could make.

Thank you. You're obviously a facility that takes quality improvement very seriously, and I won't say you didn't need the IMPACT Registry, but for sites that are thinking about getting involved in the IMPACT Registry, or deciding whether to stay in it, what was it about the registry that was the selling point or the value add, to your hospital, to your administrative C-suite? And was that a big ask? Because, again, you were doing quality work without the IMPACT Registry.

Yeah. My hospital is involved in several other registries, and obviously our adult cardiology colleagues are involved in multiple ACC registries, because we do TAVRs and all kinds of things, so we are very involved. I myself, before I even got here, was involved at one of the pilot sites, so I had already known about this project, and we had been involved in quality improvement.
The hospital here had decided that quality improvement was important to them, and that this is one of the things that defines programs that are interested in quality. The other thing that helps, I'll say here, and I'm sure this is the case at a lot of other programs, is that you can be involved in research: you can apply to do research on the data that's been collected. And you get points in the U.S. News & World Report rankings. U.S. News & World Report is becoming more important to patients and families when they do their research, and I think having that be one of the checkoff boxes that says you're interested in quality is important, and it makes people participate.

This might sound silly, but a successful program usually needs a champion. Are you that champion, or did you have to muster support across the aisle, if you will, with your surgical colleagues? Who takes the lead, who's the champion at your site, and how does that work? Does it have to be collaborative to be successful?

At my site, I'm the lead and the champion for this project. We have coordinated it well into our cath lab flow, so that collecting and sending the data has become part of our regular routine, because we've had it for many years now. When we first started, there was a bit of a learning curve getting everyone used to it. There's a program you can add on so you can send the data more automatically; we use PedsCath, as a lot of pediatric programs do, and we bought the PedsCath add-on, which saved tons of time. As new people join us, the fellows learn the few boxes they have to fill out and get very good at it; we explain why it has to be done, and it becomes part of their routine. As far as getting buy-in, it wasn't very hard here, as I said, because when we adopted it, it was one of many projects the hospital was taking on. In pediatrics especially, we were very interested in having our quality improvement program grow. We have a pediatric head of quality improvement who wanted to see it, so that was helpful, and several other people higher in the hospital administration were very interested in it. As is the case at most surgical centers, we participate in all the surgical databases, so with STS, the surgeons are used to this. They're like, oh, so you're going to do the cath one like I do the surgical one. And they kind of assume it gets done automatically.

I'll speak for many people listening: that's a gift, if you will. Do you routinely share the information? Are there team meetings? How does the information that comes out of the IMPACT Registry get shared with all the team members in the cath lab and with the physicians, both invasive and non-invasive cardiologists? Does everyone get an equal look at how you're doing, or do you pick a metric, or is it shared in a team meeting?

What we do here at my hospital is hold a monthly heart center quality improvement meeting, where we share different measures from all the different parts of cardiology and surgery. I don't present the IMPACT numbers every month, because they're quarterly numbers, but every quarter or so I show where we are in IMPACT. So I'll show our IMPACT data. I also keep a lot of data locally about major complications.
And we discuss in our quality improvement meetings any complications or morbidities, or things we just think could have been done better, and that's for our whole program. Then intermittently, every quarter or so, we show the data from IMPACT and where we fall in relation to other programs.

I know the IMPACT Registry, as well as STS and the other ACC registries, does auditing. Do you do any type of self-auditing or checkpoints along the way, in between audit periods, just to make sure everybody's on the same page?

As I said, we do our data through PedsCath, and because it goes in through our cath reports, it's actually a lot easier to check for accuracy: the cath attendings have to sign the reports, so they actually have to look at and effectively sign off on each IMPACT entry. Then one of our cath lab secretaries submits it, and we get a final report telling us if any data is questioned or flagged. I'll go through with them any problems that arise. You know, someone will put a one instead of a zero somewhere, and it might come up and say, this size is out of range, are you sure? Then we go back and check whether it was a typo or the real data. We do that when we submit the data, which is quarterly, so every quarter we have to go through and take a look at the data. The check in PedsCath and the check when you submit the data are good times to look at it, and that's what we do. Most of the data has been very accurate, as I said, because the fellows put it in in real time and the attendings check it as they're checking the reports, so it tends to be quite accurate.

Great. I have one last question, but I have to say, given your passion and dedication, the attention to detail, and the beautiful presentation you gave us today, I would not be surprised if you get some calls saying, hey, could you come help us? I know you're happy in New York. I know you're a New Yorker.

I'm a New Yorker.

I'm just saying you might get some calls, because this is not what we hear at some hospitals; it's a struggle and a heavy lift. I'll end with the obvious question: how has life changed during COVID? Have you had to change procedures or policies? Do you see volumes changing one way or the other? And is there a plan for a reboot, if you will, whenever this settles down? Any words of wisdom or policies you've put in place there?

As you know, New York City was hit hard by COVID, and here in the Bronx we were hit quite hard. We have a children's hospital inside an adult hospital, and our children's hospital became all adults during COVID. I and the other cardiology attendings were the attendings on one of the pediatric floors running a COVID unit, so our patients were 50 to 80 years old with COVID. Our pediatric numbers did go down; as everyone in the whole country knows, everyone's numbers went down. We were slowly able to go back, and we never completely shut down in pediatric cardiology, because we couldn't: we always had patients coming in with heart failure and congenital heart disease. So our clinics were always open, and our echo lab was always open. Then we had a rush of MIS-C kids in New York, and we had a lot of those patients here. After that wave, things have now quieted down again.
And we're slowly getting patients back in. One of the main issues we've had is that patients are scared to come in; they're worried about it. Here at our hospital we do routine COVID screening for all of our procedures: anyone who's going to have a cath, surgery, or any procedure with anesthesia gets a COVID test two to three days ahead of time, and if it's negative, they come through. We also have daily screening at our doors, with temperature checks as people come in. We are slowly getting back to normal. Our plan, hopefully, is to keep convincing patients that it's safe to come in. We've already started giving flu shots to our cardiac babies to try to keep them from also getting the flu, and the hope is that we slowly convince people to come back.

We have not made any big changes in clinical studies at our site either. Clinical studies that are clinically indicated keep going, and that has continued throughout all of COVID. So we're open for business, and we're trying to make sure people feel safe coming in. We're trying to stick with all of our prior QI. Things definitely changed during the height of COVID, but now we have protocols for what to do if someone comes in who's COVID-positive: how do you run their cath? My cath lab is actually a positive-pressure room, but it can be switched to negative pressure if we need to do a case on someone who's COVID-positive, and we've done that for emergencies, or for patients whose status we don't know when we have to proceed. So the cath labs and the ORs here have new protocols for dealing with COVID-positive patients, and for when we don't know. Lots of N95s and masks and lots of hand washing, but we're getting through it.

They're lucky to have you. I think this concludes our presentation. Again, I'd like to thank Dr. Nicole Sutton for the great presentation, and thank you all for joining Quality Summit.
Video Summary
In this video, Dr. Nicole Sutton, a pediatric cardiologist, discusses the use of IMPACT data to improve congenital heart disease programs. She explains that the IMPACT Registry (Improving Pediatric and Adult Congenital Treatment Registry) aims to improve patient care and promote quality improvement at participating institutions. The registry was approved by the council in 2007 and has since grown to include 106 centers and over 246,000 procedures. Dr. Sutton highlights the benefits of participation, such as access to ACC quality improvement programs, involvement in IMPACT-specific quality improvement projects, and the ability to conduct research using the registry data. She also mentions that participation in the IMPACT Registry earns points in the U.S. News & World Report section on quality improvement. Dr. Sutton goes on to discuss another registry, the National Pediatric Cardiology Quality Improvement Collaborative (NPCQIC), and the impact it had on reducing interstage mortality and growth failure. She explains that the NPCQIC used control charts and funnel plots to analyze the data and identify variation among centers; the collaborative then shared best practices and worked together to improve outcomes. Dr. Sutton concludes by discussing ways to leverage data from the IMPACT Registry, such as identifying sites that are performing well and learning from them, or identifying underperforming sites and working with them to improve. She also highlights the importance of sharing the data and the benefits of participating in the registry at the hospital level.
Keywords
IMPACT Registry
patient care
quality improvement
registry data
NPCQIC
interstage mortality
control charts
best practices