ACPC Quality Network Learning Session 2024-Q1 – Program Updates, MOC Part 4 and Metric 028 Overview
Video Transcription
Hi, Jen. How are you? Hi, good. How are you? Well, thank you. Can you hear me okay? We can hear you great. Absolutely. Okay. I don't think you and Dr. Park have met, but Dr. Park, I think both of you have heard of each other from me. Dr. Park is our new chair, so welcome. Thanks for being willing to present, Jen. I'm going to give everyone a couple of minutes to join, and then we'll get started. Hi, Dr. Roberto. Hey, Michael. How are you doing? Thank you again, everyone who's joining us. We're going to give everybody another minute or two to sign on, and then we'll go ahead and get started with our presentations today. Thank you all for joining today for our first quarter learning session of 2024. We're going to go ahead and get started, and I will pass it over to Dr. Park in just a moment. Let me go ahead and share my screen. All right, can everybody see my slides? Yes. Excellent, thank you. All right, again, welcome to our first session. This is being recorded, so if you have any colleagues who missed it, or if you need to drop off at any point, you will be able to view this in our ncdr.com QII Learning Center. I'll show you a little bit later in the presentation where you can navigate to get to that. So for today, we'll go over some program updates in the beginning. Again, I'm going to pass this over to Dr. Park in just a second, and he'll walk you through the rest of this. But we'll do a quick review of login procedures and how you navigate ncdr.com, which is the portal that's available to all of our participants. We'll also have an overview of the ABMS/ACC-sponsored MOC Part 4 activities and a bit of time for a Q&A. We'll also get into a metric discussion. We're going to focus on one specific metric. That's metric 28, application of the Pediatric AUC to initial outpatient echocardiogram orders.
And then we'll have a broader discussion about shared best practices from one of our participant sites that's highly active with QNET. Again, we'll have a little bit of time for Q&A and then an open forum at the end. Dr. Stravostova is traveling at the moment, so she'll try and join. If not, between Dr. Park and I, we'll be moderating the last half of that session there. So the first update I want to give before I pass this over to Dr. Park: QNET had been led by Dr. Ariane Morelli during her term. We were very, very excited to learn late last year that Dr. Morelli had been appointed as the scientific director of the CIHR Institute of Circulatory and Respiratory Health in Canada. So she unfortunately had to step down from her role with QNET effective February 1st in order to set up that new institute, and we wish her well. She left us with a beautiful vision for QNET, really calling for us to look at adult congenital heart disease and to expand our involvement in that area. So congratulations to her. She was not able to join us today, but we wish her the very best, and she will certainly continue to be involved with QNET in a different capacity. So on that note, Dr. Park, who has also been someone engaged with QNET and was around during its inception, has stepped into this new chair role as of February 1st. So Dr. Park, I'll go ahead and hand it over to you to introduce yourself and give us a little intro before we move through the presentations here. Perfect. Thanks, Jen, for the introduction. I know a lot of people on the call, not only from my work on the committee for the past six years, but also through various other national initiatives. My name is Matthew Park. I'm a pediatric cardiologist up here in Tacoma, Washington with Northwest Congenital Heart Care. And just some quick background on me: our group up here in Tacoma is part of the Pediatric Cardiology National Network.
There are 20 cardiology practices around the country that we share an affiliation with. And since 2016, I've been the co-director of quality and safety with Dr. Ken Schaefer from our Austin practice for some of our national quality initiatives. And it's through that affiliation that we became involved in QNET at a very early time. Ken and I have both been on the QNET committee since 2018, so six years. And it's been a great opportunity for us to meet a lot of you around the country and to work with other committee members. And this new role as the chair of the committee came as kind of a surprise to me, especially with Dr. Morelli needing to step down for her exciting new position. So I'm still stepping into the role and learning, and I look forward to getting to meet all of you and work with you in the future. If you have any questions or anything I can do to help you with, please feel free to let me know. And Jen and I have already started meeting together, and I think we have a lot of exciting things coming for QNET this year and beyond; we'll share a little bit of that here. And then we'll talk more at the ACC in-person meeting as a steering committee about some of the really exciting things that we hope to develop and make part of QNET for the future. Thank you so much. Like I said, you just jumped right in. So we're excited. But I think you'll hear a lot from Dr. Park. He's got great firsthand experience with QNET. But certainly as he's stepping into this role as well, we really want to hear from all of you, right? What's working, what's not working, what you'd like to see with this program. So more to come, but certainly give us that feedback, and that'll help us shape our operations and our strategic plan moving forward. On that note, one thing I wanted to highlight: we've had this conversation come up a few times on our steering committee as well as among our participants during learning sessions.
So when we migrated over to ncdr.com, it provides a wealth of resources. You can share documents on that site as participants. You can access your reports, some things I'll go into in a bit. One piece though that it does not do, and I know there's been conversation both between the section and QNET, is having an online forum to discuss cases, to share best practices in a discussion format. And so I wanted to highlight, we do actually have a resource available for all ACC members, and that's DocMatter. So if you log on to DocMatter, you'll find that there are topics. And I don't think Dr. Sachdeva, the ACPC council chair, was able to make it, but there are discussions that are already going on in the adult and pediatric cardiology CHD topics on that site. So I would definitely encourage you, if you have any questions about how to access this, you can go through ACC member services, but it really is a great, great place to connect with community. And then we'll be checking this site as well. So if we see that there are any topics that are coming up that would be applicable to include in some of our learning sessions or to share resources, we'll certainly be doing that as well. Also wanted to highlight the ACPC section and ACPC QNET presence at ACC 24 in Atlanta next month. So the ACPC section will host a community day at the Hyatt Regency Atlanta. That'll be on Friday, April 5th from 1 to 6 p.m. I won't be there, but Dr. Park and other members of our steering committee and QNET participants will be. So certainly if you have any questions about our program, go ahead and approach them. I'll be there on Saturday for the section meeting. That's going to be Saturday evening from 5:45 to 6:45 p.m. at the Omni Atlanta. We'll also be presenting some quick updates on QNET, announcing our participation award winners for 2023 being one of those highlights for us. And then on Sunday, we'll also be hosting a moderated poster presentation.
This is an abstract that we had submitted, essentially highlighting ACPC QNET. So we welcome all of you. That'll be on Sunday from 11:00 a.m. to 12:30 p.m. in the congenital heart disease population health session. Our presentation time is 11:45 to 11:55 a.m. in the moderated poster theater. There will be a five-minute Q&A, so if you have any questions, I certainly encourage you to attend that as well. And I'll be around at ACC, so feel free, if you're going and you want to connect, send an email and I will be happy to meet up with you. All right, so ACPC, and feel free, I'll be monitoring the chat as well. So if at any point through this session today you have any questions, go ahead and post them there. So, quick program update: ACPC QNET has a presence on ncdr.com. So previously, everything was done manually, right? We were emailing reports out, and all of the resources were available on our external website. We've now pulled these in as part of a multi-step optimization process that we have here. So this is step one, right? We've got all of our sites able to log into ncdr.com and view not just resources that are available for QNET, but also for other registry participants, which I'll get into in just a second here. So I'll pull up a live demo just to show you how to navigate the site, but the most important thing I want to highlight here: you should have received, months ago, an email with information about your participant ID. I am going through and looking at what sites have not gone in and updated their profile and added users. So you'll be getting an email directly from me highlighting, here's your participant ID. And if you go in, NCDR has their centralized customer service team as well. So you have resources beyond just me with QNET.
What's really critically important here, and the reason I'm highlighting this Start Here menu, is we really need you to log in, make sure that you have your site info up to date, that you have given access to anyone at your site that you would like to have it, and that you've entered their information. The next phase coming for our presence on this site is actually setting up dashboard products more similar to what we have for the registries. We need to make sure that you're in there and everyone's able to access this information before that launches. So the best place to start once you've logged in is to go to the Start Here menu. And there are a couple of brief videos here; it shouldn't take you more than 20 minutes to go through this, and that really gives you a helpful, very specific overview of what you need to do to get started on this site. And then everything else here is specific to our program. So if you're familiar with the Impact Registry, for example, they have some different resources that are available because they're a registry. So ours is slightly different to navigate than the other registry programs, but we still have a lot of NCDR-specific resources that you can access here. So let me, I'll stop sharing this for a second and I will go in and show you. And if anybody has any questions while I'm doing this, please feel free to ask as well. It just logged me out, so one second. So when you log on to ncdr.com, you're going to get to this landing page. You can see here Adult Congenital and Pediatric Cardiology Quality Network. Any announcements that are critical will be here. And we do update these, so you'll see this 2023 Q4 call for data will be taken down next week, as we've closed out that data collection period. Currently, we're still using the same data collection process, where you're going in and you're using that unique survey link that you've been sent to submit data.
We're very close to being able to do that here as part of the multi-step process to move into having that NCDR dashboard product available for QNET as well. So again, I just highlighted the Start Here menu. Great place to begin. Under Administration, this is where you would update your individual profile, so any of your preferences in terms of notifications that are coming from the system. The site profile is specific to your center. What's also really important to remember about this is that this information is then used to populate Find Your Heart Home. So if a patient is looking for information about where to get care for specific conditions, that's where they would go. So make sure, really important here, that your site profile information is updated so that it's reflected accurately on that website. And then Site User Administration: this is where you're going to be adding users from your team that you would like to access this information. Reports: this is where currently you would get your quarterly benchmark reports under File Delivery. This piece will stay, but once we migrate to a new dashboard, you'll be able to actually pull those reports up through that dashboard, as opposed to the reports that we're pushing out as files currently. And then there are a number of resources available here. I won't spend much time going through these, but I did want to highlight that any of our prior learning resources, documents, PDFs, anything like that are housed here under QNET Learning; Documents, anything that highlights data collection, and those user guides are all there. You're able to access a participant directory for any other sites that are enrolled. You can share resources through this Participant Resource Sharing link. And then finally, the QII Learning Center: this is where you can access recordings and other resources, not just from QNET, but from other registry products as well.
And then I also wanted to highlight here, you can click down at the bottom, Contact Us, and that takes you to the NCDR customer support team. So again, not just me. You can open a case here if you're having an issue with access; if you have a question, you definitely want to do it through this site. So that's ncdr.com. It's a very quick overview. Again, if you have questions, if you're having a problem logging in, just reach out to me and I can help you. Or if you're in and having some other issue, submit a case here. So that's that piece for you. I wanted to highlight one other quick bit here. So when we talk about these new dashboards as part of our optimization process, currently, again, you're receiving reports as a document that's sent out. The other registry products are going through a multi-phase process to launch those dashboards. The Impact Registry launched recently. And so I just wanted to highlight here quickly, I won't get into what all this actually means, but so you can see visually, QNET will have something similar. So an initial launch will include the histogram that you see now, right, those charts where it shows you, here's your performance, here's the benchmark. We will do away with the site code. So you'll just see the bars and you'll see yours highlighted. You'll also be able to access data over time, which I know everyone is very excited about. But as we're going through this process, I would definitely encourage you, if you're interested in seeing your performance, the number of metrics that you submit by category, things like that, even if you just want to sketch out "I would love to see a report like this," or, making sure you're not sending us anything with patient information, you're welcome to show us a report that you're generating internally. You're going to see this later in the presentation from Intermountain Primary.
They're actually doing that at their site. And that's something that we want to incorporate into this new dashboard product. So certainly if you have questions, if there's something you would like to see, do share that with us. On that note, I am going to go ahead and turn this over to Dr. Park to give us an update on MOC Part 4 and share some exciting news about this as well. Thanks, Jen. And just for everyone on the call, again, I need to highlight the importance of this: if you have not accessed your profile on NCDR, please work on doing that. Work with Jen to make sure you have access, because as we develop those dashboards, I think everything we do in QNET will be enhanced in the next few months as those dashboards are rolled out and we can start using those tools. What I'm going to talk about is one of the most popular reasons to join QNET, one that we get a lot of member participation with, and that's the ability to earn MOC Part 4 credit for the physicians on your team. As you know, ACPC QNET is a portfolio program of the American Board of Medical Specialties, or ABMS, and what that means is that we have projects that have been approved through ABMS for the granting of MOC credit. We currently have five approved projects that are highlighted in yellow by their general category: projects in BMI, chest pain, Tetralogy of Fallot, DiGeorge screening, and non-invasive imaging, which has two different projects associated with it. One of the first things Jen and I did right after I was asked to be the chair was meet with the ABMS organization about how we can further enhance our portfolio of available metrics. Jen and I are very excited that the ABMS approved for us to migrate all 33 of our quality metrics into projects that can earn MOC credit. All of those metrics are in the different categories that you can see there, from infection prevention and Kawasaki disease to arterial switch and fetal echocardiography.
The metrics will be grouped and an MOC project template will be made so that any site can essentially choose whichever of the 33 metrics interests them and have that be your metric to earn MOC credit. We've heard from a lot of people, well, we've done BMI before and we don't have any interest in repeating that same project, or chest pain, or we have interest in doing a project for Kawasaki disease or some of the other metrics. Jen and I think that it is really exciting that any available metric will be eligible to earn MOC credit. That is something that we will have finalized with ABMS by the end of the year. For 2025, all 33 metrics will be available for MOC credit. Next slide, Jen. With that, I just wanted to quickly review the process of earning MOC Part 4 credit. This is a common question that Jen gets, and we felt that it would be appropriate to review that process in this forum to hopefully answer all your questions, but if you have more, please ask Jen or me about that. The first step is to select which activity and metric you're going to work on, and then complete the team enrollment form and email it to Jen at ACPC QNET to tell her that you are starting an MOC quality improvement team activity. The form is right there in front of you on the screen and is available through the NCDR website; you can download that form and fill it out, or you can email Jen to get the form. It's important to state your timeline. As you can see there, state when you will collect your baseline data and your improvement data, as well as identifying the members on your team who will be participating to earn MOC credit, their emails, and their certifying board. One change was made in the last few years: we had been a portfolio member of the American Board of Pediatrics.
We changed to the American Board of Medical Specialties because that covers not only the American Board of Pediatrics, but also the American Board of Internal Medicine, so that anyone on your team, whether they're pediatric cardiology or adult congenital, can earn credit, and that's why you need to list the certifying board on this form. Next slide. So a key part of this is that the team lead and team members need to work together to complete meaningful participation requirements as outlined by all those medical boards. These MOC Part 4 credits are not free. You need to make sure that everyone is participating in a meaningful way. Next. So the first key is to review and submit your baseline quarter of data. So you told Jen what project you want to do, what metric, and you need to have a baseline quarter of data showing how you're doing on the metric at the present time. Next. Then you'll discuss with your team to design and implement your change strategies. What are you going to do to improve performance on that metric? Obviously, there's the PDSA cycle: What are you trying to accomplish? How will you know that a change is an improvement? And what change can you make that will result in improvement? Then there are resources available from ACPC QNET to fulfill those meaningful participation requirements that the team lead will need to track, including project orientation and meetings of your team. There are QI science trainings as well as participation in these learning sessions. And those are some of the resources as well that Jen highlighted from ncdr.com, in that your team can go in and get some of those meaningful participation requirements fulfilled by watching a video on QI science training or participating in a learning session, if not live, then by reviewing a previous learning session. And then you need to submit two consecutive quarters of data after the baseline quarter. So, your baseline and two additional quarters to gain credit.
So, three total quarters of data must be submitted. And this is something that a lot of people need to recognize, because Jen gets a lot of calls in October and November from someone needing credit by December. And if you haven't started the process, we will not be able to give you credit through ACPC QNET. You need three total quarters to be able to submit for credit. And then, working with ABMS, starting in 2025, when all of the metrics will be available for MOC Part 4 credit, one additional requirement that we agreed upon, and we think will be a wonderful addition, is that you need to do one of these three activities to get your MOC credit. The first, and what we hope is the most common, is to present during a learning session. A commitment that we have from the steering committee is to hold these learning sessions every quarter without fail. And we hope to hear from a lot of you about your MOC credit projects, because it's an excellent way to learn from each other. One of the original goals of QNET that is still one of its highest priorities is that we all help each other learn and get better together. That's why we're having Primary Children's from Utah present, because they are our highest performing center and they submit a lot of metrics. Maybe they have some tips and tools that we can all learn from. And we're hoping that by including this as part of the MOC Part 4 credit requirements, it will give all of us an opportunity to hear more from each other and more about how we can better improve the quality of care that we provide to our patients. The other two ways are to submit an abstract for the ACC Quality Summit or to present at the ACC Quality Summit. And those are wonderful opportunities for your staff or fellows or anyone else to present a quality improvement project to be seen by their peers, as well as to meet the requirements for MOC Part 4.
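As an aside for anyone planning a project timeline: the three-quarters-of-data requirement described above can be sketched as a quick feasibility check. This is a simplified, unofficial illustration (not a QNET tool) that assumes calendar quarters, a single calendar year, and that a quarter's data is only complete after the quarter ends.

```python
from datetime import date, timedelta

def quarter_end(year: int, quarter: int) -> date:
    """Last calendar day of a quarter (Q1 = Jan-Mar ... Q4 = Oct-Dec)."""
    if quarter == 4:
        return date(year, 12, 31)
    return date(year, 3 * quarter + 1, 1) - timedelta(days=1)

def can_meet_deadline(year: int, baseline_quarter: int, deadline: date) -> bool:
    """Baseline quarter plus two consecutive improvement quarters must all
    close before the completion-application deadline (same year assumed)."""
    final_quarter = baseline_quarter + 2
    return final_quarter <= 4 and quarter_end(year, final_quarter) < deadline

deadline = date(2024, 12, 1)  # the December 1st deadline mentioned in this session
for q in (1, 2, 3):
    status = "feasible" if can_meet_deadline(2024, q, deadline) else "too late"
    print(f"Baseline Q{q}: {status}")
```

Under these assumptions, only a Q1 baseline leaves all three quarters closed before December 1st, which is exactly why the October and November phone calls mentioned above come too late.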
Then the final step, which everyone knows is the most important in many ways, is: how do I actually get credit? As soon as the ACPC Quality Network team confirms the submission of three quarters of data. So again, I cannot stress that enough: you need to have submitted three quarters' worth of data. Then all team members hoping to earn credit will complete an online attestation form to attest that they participated in the project, what their participation was, and that they met all those criteria. Then ACPC QNET will submit the credit request to the ABMS within five days of receiving those attestation forms, and your credit should be reflected on your diplomate's board record within 30 days after submission. So again, with that process, and here Jen highlighted it, the deadline for 2024 physician completion applications through ABMS is December 1st. And so that's why, again, if your team is hoping to do a project in 2024 for credit, you really need to get on it right now, or you will not be able to meet the time requirement of three quarters of data in time for that deadline. I hope I was clear on all of that. It is a common question that Jen gets. Are there any questions that Jen or I can answer in regard to the MOC Part 4 credit? You can raise your hand. You can type it in the chat. Any questions at all about that process? I did want to mention one other thing as well. Another question that comes up sometimes with this is: we have had sites where they submit three quarters of data, I send an email and say, how's it going, are you ready to submit those credit requests, and they'll come back and say, we're not quite there yet, right? We need a little bit more time. And so I also really encourage sites, if you are struggling to find ways to improve, if you have questions, to reach out.
We've had many instances where, again, a site will say, you know, we've implemented these interventions and we're not seeing any progress. Is there anyone who can help with this? And most of the time I can then go back and look at who's submitting data, who's presented during a learning session, who the authors on a specific metric are, and reach out and say, hey, we have a site that has questions for you. I've never had anybody come back and say, oh, no, I don't have time for that. It's always a wonderful sharing of information. And in fact, as part of the next section we'll get into, that's what is driving us to present some of this info. But I would certainly encourage sites, if you find that you need additional quarters, that's okay too. And just to add on to what Jen said, from an MOC credit standpoint, you don't have to show any improvement, which I know sounds strange, but it's more about the process: that you attempted, and submitted, an effort to make an improvement. While we hope everybody improves, it is a learning process. And like Jen said, if you're having trouble improving, you can still get your MOC credit, but that's where we hope to provide a better forum for everyone to work collaboratively so that we can all improve together. But don't let a lack of improvement stop you from submitting for MOC credit. All right. I don't see any questions in the chat here. So, if no one has any other questions, we can go ahead and move on to the next section of the agenda here. I know we will have lots of time to discuss and present. So, give me one second here. I'll go ahead and share my screen. All right. So, in the next section here, we're going to get into a more specific QNET quality metric discussion. We'll focus a little bit on metric 28, and then hand it over to Dr.
Suthar with Children's Healthcare of Atlanta, and then invite Jennifer Marietta from Intermountain on to talk about what's working at their site. A little bit of background before we get into those two presentations, and Dr. Park, I will also hand this over to you a little bit, but we do track every quarter what our metric submissions are looking like. Anyone who's submitted data knows that when you go into that data submission survey, it does ask you for metric non-submission reasons. We're no longer requiring that every quarter; we're only doing that for the fourth quarter each year, but you do have the option, and we do encourage you to submit that information every quarter because it is helpful. We do look at it, but this will give you a sense of what we're seeing every quarter, metric by metric, right? So, if you look at the spreadsheet here, you can see, as an example, metric one, complete aortic evaluation for adults with repaired CoA. These are 2022 numbers; we are just finalizing the data for Q4 2023, but looking at 2022, you can see here we didn't have enough centers. This is also before we adjusted our site and patient volume minimums, but you can see, right, five sites in Q1, four sites in Q2, seven in Q3. If you see seven sites, that may mean that there's a site that had insufficient patient volume, and that is not a combined data quarter. So, unfortunately, we kept seeing that trend, and then all of a sudden, in Q4 of 2022, we had enough sites, we had sufficient patient volume for those sites, and every quarter since, we've been able to include metric one in the quarterly benchmark reports. We've seen that trend among a few other metrics, but one where we've not seen as much movement as we'd like is metric 28, and that's what we'll focus on again with Dr. Suthar here. Dr. Park, do you want to add anything about this?
I've got it about halfway down the screen there, where you can see, you know, Q1, Q2, Q3, an average of two or three sites each quarter that were submitting data. I'm hopeful this quarter we're going to change that. I have one more site I'm waiting on, and then I think we're going to get there. Yeah, no, this is something that, as each of you and your individual sites consider what you're working on and what metrics matter to you, we hope that providing this information will encourage some of you to look at these metrics and say, hey, that might be a metric that not only interests us, but by having just one or two more sites submit data, we'll have enough data to include it on those quarterly reports. Part of the process of improvement is seeing how your performance compares to your peers, and there are sites working on some of these metrics that don't get the benefit of seeing what their peers are doing, just because we don't have enough peers submitting data. And so, please take a minute. I hope everyone on this meeting thinks about what you're currently submitting and looks at maybe a few more metrics to try to submit so we can meet those thresholds. Thank you. Yeah, and I'm highlighting here as well, and again, I'll pass this back over to you, Dr. Park, and then we'll move on to Dr. Suthar, but I did want to show everyone here, right, so this is metric 28 in the first three quarters of 2023. These are the data non-submission reasons we gathered from sites during that data submission process. So, you can see here: wanted to collect but did not have enough time, not interested, and other, which were free text fields that we were able to gather. If anybody's on the call today and is interested in giving a little bit more insight, if you're one of those sites that said we wanted to collect but we didn't have enough time, why was that, right? Is there anything that you can share here? Is there anything that you'd like to ask?
Because as we move into Dr. Suthar's presentation, that site specifically is no longer submitting data because they've gotten good at this, right? And so, we invited them on because they have some best practices to share with us, but if you have any questions, any clarifying questions, anything like that, please feel free to add it in the chat and I'll make sure that we're sharing that. So, on that note, Dr. Suthar, I have not met you. I know you work with Dr. Sachdeva. Welcome today. If you have your slides, you're welcome to share at any point, but if you would like to go ahead and introduce yourself, and thank you again for taking time today. Sure, thank you so much. Can you hear me okay? Yes, we can. Thank you. Okay, perfect. I'm Divya Suthar. I'm an outpatient pediatric cardiologist with Children's Healthcare of Atlanta, and I'm happy to be here to discuss the application of the Pediatric Appropriate Use Criteria for initial outpatient echocardiograms. Throughout the presentation, I'll be referring to the criteria as AUC, just for ease of understanding. I'm hoping most of you are familiar with this AUC document that was published in JACC in 2014. The rationale for this document was that the AUC has been designed to guide providers' decision-making at the time of ordering a test. Unlike guidelines, which are very broad in their scope, the AUC provides indications with more specific clinical scenarios, and the primary intent of the AUC is to evaluate patterns of care by physicians and improve resource utilization, not to restrict resources. To give an example, this is a very common scenario that we see in clinic. A 16-year-old athletic boy presents with chest pain during football, has a benign family history, a normal physical exam, and the ECG is normal. In the AUC document, you can see that exertional chest pain is an appropriate indication to order an echocardiogram.
So in Phase 1, we collected the baseline data prior to the release of the AUC document, and as you can see, 71% of the echoes that were ordered had indications in the appropriate category. 12% of these were in the maybe appropriate category, 12% in rarely appropriate, and 5% unclassifiable. Of these, 85% of the ordered echoes were normal, and the majority of the abnormal findings were in indications rated as appropriate or maybe appropriate. So this shows that the AUC document does not limit resource utilization, but acts as a valuable decision-making tool for physicians. And then in Phase 2, we looked at the effect of the passive release of the AUC document on the percentage of rarely appropriate echoes ordered. So on the y-axis, we have depicted the percentage of rarely appropriate echoes ordered. The red bar depicts the percentage prior to the AUC release, and the blue bar after the passive release of the document. The passive release of the pediatric AUC document did not lead to a significant change in practice and did not change physician ordering behavior, and thus the percentage of rarely appropriate echoes did not really decrease. So in Phase 3, we implemented an educational intervention, some of it as feedback after the Phase 1 and 2 results. We also gave a dedicated lecture on how to use the AUC document to all the physicians. We incorporated processes so that the ordering physicians can use the AUC document in their clinical workflow. In addition, monthly feedback was provided by the site PIs to each ordering physician. Following this educational intervention, there was a significant decrease in the percentage of rarely appropriate echoes ordered, as you can see here, at pretty much all the centers. But despite this, significant variability persists between different centers, and this may be related to the level of participation.
As you can see, Site 1 and Site 2 have the most improvement, and these sites were primarily contributing a lot of the data to this project initially. So, to discuss this metric: its purpose is to assess the proportion of echoes that are performed for indications rated as rarely appropriate, related to one of these four categories based on the AUC document. Those are palpitations and arrhythmias, syncope, chest pain, and murmur. The majority of the rarely appropriate indications were related to one of these four categories, and that's why we chose them. The denominator is the 20 initial outpatient echocardiograms ordered for the AUC indications mentioned above, and the numerator for this metric is the number of echoes that were ordered for AUC indications rated rarely appropriate. For the design, we want each participating center to submit 20 studies per quarter. The exclusion criteria are: any studies for which details of the clinical indication are not available; patients with a history of a previous evaluation with an echocardiogram; patients referred from the inpatient service; and patients whose specific scenario is not available in the current AUC document. Those patients need to be excluded from this particular collection form. So this is a screenshot of the data collection form that we used. It's an Excel sheet. Section one is just the center characteristics: the institution, the lab director, the order characteristics (whether it is an EMR-based application or a paper application), whether there is any kind of pre-echo screening process for the appropriate use criteria, whether it's an open or a hybrid lab, and how you identify these studies (a manual search, EMR data mining, or echo data mining). Filling out this form helps us collate data across different centers.
Section two is just for internal tracking, and as you can see, it gives us a little bit more information in terms of details about the patients, the study location, the ordering provider, and the type of ordering provider, because sometimes you can have primary care physicians ordering these echoes as an echo-only order. So this section is optional, but it may be helpful for internal tracking by the centers and for giving feedback to providers that order echoes routinely. Section three looks at the AUC indication and rating, and this is mandatory. So you select the specific AUC indication from the drop-down menu, and once you fill this in, section four is auto-populated, which automatically gives us the percentage of appropriate, maybe appropriate, and rarely appropriate from the list of patients that have been entered. So as you can see, this is an echo lab-based metric, and the data will be assessed every quarter by the lab director or their designee and reviewed with the physicians that are ordering the transthoracic echoes. The lab will be responsible for developing and instituting its own processes for improving the appropriateness of the echo orders. The suggested processes we recommend include improving the lab intake process to capture detailed indications; if those are more readily available, that improves your chances of decreasing the percentage of rarely appropriate echoes ordered. In addition, providing the AUC lecture to the referring physicians, which could include information on how to use the AUC document and the role of the AUC in clinical practice and quality improvement. And providing feedback on the appropriateness rating of studies to individual providers, which can be done manually, either by a letter or an email from the lab director, or by integrating it into the EMR and using point-of-care self-directed tools.
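The section four auto-calculation described above is simple percentage math over the sampled orders. Here is a minimal Python sketch of that step; the rating labels and the function name are illustrative assumptions, not taken from the actual Excel form:

```python
from collections import Counter

def summarize_auc_ratings(ratings):
    """Return the percentage of orders in each AUC rating category."""
    counts = Counter(ratings)
    total = len(ratings)
    return {category: round(100 * counts[category] / total, 1)
            for category in ("appropriate", "maybe appropriate", "rarely appropriate")}

# A hypothetical quarter of 20 sampled initial outpatient echo orders.
quarter = ["appropriate"] * 15 + ["maybe appropriate"] * 3 + ["rarely appropriate"] * 2
print(summarize_auc_ratings(quarter))
# → {'appropriate': 75.0, 'maybe appropriate': 15.0, 'rarely appropriate': 10.0}
```

The "rarely appropriate" percentage here is the metric 28 numerator over the 20-study denominator described earlier.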
So we looked at the integration of the AUC within the EMR, and it was interesting that we noted a significant decrease in the percentage of rarely appropriate echoes, but also the transthoracic echo ordering rate decreased from 47 percent to 40 percent, which is a trend in the right direction. I guess we shouldn't tell the financial folks in the hospital about it, but this was definitely very encouraging, and it also points towards the fact that we highlighted: the AUC is not meant to restrict resource utilization, but to make us more careful and more mindful about what we are doing. Some of the things we've also done: we have posted laminated appropriate use criteria tables in the clinics, which are available to the clinicians and the sonographers, so the criteria are easy to reference, and since ordering is in Epic, it's very easy to incorporate into our workflow. There is a similar pocket card available on the ACC website, which has some of these criteria, which can be easily downloaded from the site mentioned at the bottom of the slide and used by any of the centers. I'm happy to take any questions anybody has about this particular discussion, but in the meantime, I would like to thank Dr. Ritu Sachdeva, who's the chair of this project, and all the team members that helped with this project, and thank you for giving me this opportunity to present all this information to you. Thank you. Thank you so much for that presentation. I wanted to ask you, can you define more for the end user how you put this into play in a regular working day so that this information and data can be collected in real time? So the way we do it is that we have it integrated into our EMR, and every quarter we just download it, you know, it's an Epic report that mines the number of echoes that were ordered in the appropriate, rarely appropriate, or maybe appropriate categories.
And then, based on that, we just enter it in the Excel sheet, and we are able to get all that information. So it really is not very time-intensive, the way we have incorporated it. And, you know, we are happy to share the Epic build with any of the teams that are interested in how we incorporated it into our usual workflow, because I think that was the biggest key to us being successful in incorporating this into our practice. In the day-to-day, you don't have to take out any separate time; it's just part of your huddle. So every morning when we are doing our huddle about each patient and the orders that patient is getting, the clinician notes that, oh, this patient has chest pain or exertional chest pain, and we anticipate possibly doing an echo. But once the patient has been seen, we inform the sonographers, and the sonographers are very good about asking us, hey, which criteria do you want us to use, because it's part of our ordering system. So you have to enter it into the order system. I think that has made a big difference in how we have done it, and I think that would help the end user. Now, the physicians define the criteria, and that practice cannot be standardized, because some people just order an echo before they see a patient, and some people get the history and EKG before they see the patient. But in our system, because you have to order the echo in Epic, they would still have to assign a number based on the AUC criteria. And as I'd shown you, when we passively released the AUC, we didn't see that much of a difference. But once we did the education, and I think the biggest thing was that the education helped physicians be more mindful about using the AUC document, the next step was incorporating it into the electronic medical record. I think that made the big difference for us, because without entering the AUC appropriateness category, you cannot order the echo. So the echo would potentially not be done.
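The hard stop described here, where an echo order cannot be placed without an AUC appropriateness category attached, can be sketched as order-entry validation. This is a minimal model under assumptions: the indication numbers and order structure are invented for illustration, and the real logic lives inside the Epic build.

```python
# Hypothetical subset of AUC indication numbers accepted by the drop-down.
VALID_AUC_INDICATIONS = {1, 2, 3, 113}

def validate_echo_order(order):
    """Return a list of blocking errors; an empty list means the order can be signed."""
    errors = []
    indication = order.get("auc_indication")
    if indication is None:
        errors.append("AUC indication is required before the echo can be ordered.")
    elif indication not in VALID_AUC_INDICATIONS:
        errors.append("Unrecognized AUC indication; pick one from the drop-down.")
    return errors

# An order with no AUC indication is blocked; one with a valid indication goes through.
print(validate_echo_order({"study": "TTE"}))                         # blocked with one error
print(validate_echo_order({"study": "TTE", "auc_indication": 113}))  # → []
```

The design point is that the validation runs at ordering time, so the AUC category is captured prospectively rather than abstracted retrospectively.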
Thank you for that excellent presentation. That was great. And I think with the Epic ordering tools, if there's a document or process that you could provide to Jen after this meeting, that would be helpful, because I know a lot of people across the country use Epic. My question has to do with your little joke about the hospital financial people being happy about this. You may not know this, but just a question: has your system in Atlanta been asked by payers to provide that information, as far as adherence to the appropriate use criteria, as far as contracting and reimbursement go? The reason why I ask that question is, in our national system in pediatrics, one of my other roles is contracting, and it is something that, I can tell everyone on this meeting, payers are starting to pay attention to. And somehow, I think we as pediatric cardiologists need to be mindful of that, as it may affect our ability to be reimbursed for the work that we are doing. That's really an excellent question, Matthew. We actually do have to submit some of this documentation to some payers. I think it's mostly private insurers that are asking for this information. It's not across the board, but we do know that it is helpful for reimbursement. Supposing a claim is denied, the financial department has told us that they submit it, saying, look, this appropriate use criteria has been used, and they are able to get the reimbursement. So, it definitely helps the case in getting the reimbursement back. And you're absolutely right, this is going to be adopted by the insurance companies nationwide, and then it's going to become kind of the standard of care that's expected. So, I think these are just the first few steps that we are seeing right now, but we are seeing some of the private insurers requesting this information. Thank you. Thank you.
On that note, I'm going to turn to the next portion of the session, but remind everyone we do have additional Q&A time at the end. So, if you have questions, please go ahead and post them to the chat or raise your hand and we'll get to you. All right. So, I wanted to highlight here, and then again, I'm going to turn it over to Dr. Park to lead into Intermountain Primary's presentation. So, on average, each quarter, we have approximately 38 sites that are submitting data. Of our participant sites, we then have 10 that are not submitting data each quarter. We wanted to highlight that to say, you know, we've got roughly three-quarters of our group that we are collecting data from. So, we certainly have an opportunity for additional sites. Even if it's just one metric to start, we certainly encourage you to do that. We also average about four or five superior participant sites each year. Those are sites that are submitting 10 or more metrics each quarter. I'm going to let Dr. Park go through the statistics, you know, by quarter and sort of lead up to why we asked Intermountain to present today, as they are one of those highly engaged sites. So, Dr. Park, I'll turn this over to you with these statistics here. Just so everyone knows, you know, like Jen said, while about 10 sites do not submit any data in a particular quarter, for the 48 sites that are currently QNET participants, the average number of metrics per quarter is five, which is good. But again, we want to improve upon that. That's one of our goals as a steering committee. But of the sites that are considered superior participants, so in other words, the few sites that submit more than 10 metrics, that average is 17 metrics per quarter. And it's something that my site here in Tacoma, we take a lot of pride in that we've been a superior participant. We work really hard to submit as many metrics as we, you know, physically can gather data for and submit.
But despite all of our efforts, I can never beat the University of Utah. I can't figure out how to get as many as they do. They average 23 metric submissions per quarter. And as we prepared for this learning session and getting everyone together today, we really thought that we would like to ask Jennifer and her team from Primary Children's to just tell us what their process is, because 23 metric submissions per quarter is fantastic. They are, every quarter, the leader of QNET as far as the number of metrics they're submitting. So, Jennifer, applause to you and everything you're doing. Just tell us how we can all do better. Okay, thank you very much. I'm excited to present to you today. So I am Jen Marietta. I am the co-director for quality improvement here at Primary Children's in Salt Lake City, as well as the STS data manager. I'm grateful for the opportunity to share with all of you our process in completing data collection for most QNET metrics each quarter, as Dr. Park mentioned, and I'll share a little bit about our involvement in QI. So I hope you find it helpful and maybe even walk away with some ideas to implement at your center. So to get started, this is a graph depicting the number of metrics we've participated in each quarter since we began in quarter four of 2015. As we were getting started, we slowly added on more each quarter with a goal to eventually submit for every single metric. And as you have heard, QNET has a total of 33 metrics. And over time, as we started diving into each of them more, we decided some weren't worth participating in for us, and we continue to evaluate that each quarter. So I wanted to give you an overall idea of the process of gathering data. I am the primary abstractor for QNET, and so the first step I take each quarter is to email our data analysts and other data collectors one month prior to the due date to remind them of the upcoming submission.
When we started collecting data on more than 10 metrics, I met with our analysts to really nail down how they were going to write their queries so we'd get the most accurate data in return. This was super helpful and needed, and has significantly cut down the time I spend digging through the charts. In addition to that, I have support in data collection from our echo and fetal cardiologists, our sonographers, our genetic counseling assistant, and our neurodevelopment program coordinator. They all contribute to the data collection. In that email that I send them, I ask for a reply by the first of the month, so that gives me at least two weeks to make sure all of the data is collected and ready for submission. Once I receive the file from the data analysts, I copy and paste that into a separate file that I keep on an annual basis, which I have an example of that I can show you. And a couple of the metrics, we have figured out how to almost completely automate. It's not much, but it's something. And then I'll go through the manual data extraction metrics that we participate in. And once all of that data is collected, I then review any questions or discrepancies that come up with my QI director and then submit. So, here are the two automated metrics that we participate in. The BMI measurement is the most straightforward metric. It's just a matter of pulling the total number of patients seen in clinic that quarter along with the number who had a BMI measurement. So, it's just two numbers that I receive from the data analysts and submit. The ECG for chest pain is pretty close to automated. Every quarter, I get about two or three where the system wasn't able to pull an ECG within the specified time frame. So, I just have to go into a couple charts to see if that was actually completed or not. And then the metrics that we participate in that require manual data extraction are listed here.
And on the next slide, those highlighted are the most difficult to collect for us, which I will explain coming up. So, as you can see: the appropriate counseling for those with a BMI greater than the 85th percentile, the other two chest pain metrics, all of the KD metrics we participate in, genetic testing in Tetralogy of Fallot, two of the three ASO metrics, and then all of the echo metrics besides the TEE ones. The most difficult ones to collect are the appropriate counseling with the elevated BMI, genetic testing in Tetralogy of Fallot, and then the neurodevelopment assessment for ASO patients. These are the most difficult because there are multiple steps involved in each of them. For the BMI and the genetic testing, we do a random sample, which I feel adds another layer of complexity. Once I have my list randomized, I then have to look up each patient to verify the diagnosis for metric 20, then look in the chart for genetic testing or whether education was given for metric two, and then look up documentation for a patient being seen in our neurodevelopment program. All of these things can be found in multiple places in the chart, so it's quite difficult. These ones are just more time-consuming than the others. So, I have enlisted the help of our genetic counseling assistant and our neurodevelopment program coordinator, because they know exactly where to go to find this information. Once I have my final list, I just send that to them to verify and make sure I've gotten everything. And it's been extremely beneficial to involve the experts wherever I can. And like I mentioned earlier, we are constantly reviewing the metrics we participate in and questioning whether we should continue to participate in each of them. So, these are the metrics that we have participated in before but are no longer.
So, the first one, the complete aortic evaluation for adults with repaired coarctation: it was decided by our ACHD team that we do not follow the same guidelines as written in the metric, so they didn't find value in participating in that anymore. Number 4, critical results reporting in echo: we found that we were the only reporting center at the time, and when we implemented a new EMR, it was no longer tracked, so it was difficult to collect. Numbers 10 and 12, the infection prevention ones: we consistently had low patient volume, but we were also always at 100%. And so, because of that, and with not enough participating centers, we didn't think it was beneficial to continue to participate in those. Number 11, the flu vaccine: this is tracked and required at our hospital level, so we were consistently at 100% and didn't find value in submitting that one. Number 21, echo performed during the first year of life for ASO patients: we had poor compliance locally for this one because it was not part of our defined guidelines for our ASO patients, so we decided to just stop doing that one. And then the TEE ones: these were somewhat difficult to track. STS does track some of this, but we didn't feel like we were capturing all of it accurately. But we also did not have any discrepancies in the year that we did participate in both of these, and so we decided to stop collection on both of those. So, here's an example of the data collection form that I use. I list the definitions for the numerator and denominator on every metric, as well as some small notes that I've taken over the years, as well as exclusions. So, I have it all right here in front of me while I'm going through each patient chart. And then for each year, I just add a new tab for each quarter, so I can just copy and paste the definitions, as well as our internal data tracking. So, once we have a total for each one, I just add those numbers here and update our charts.
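The random-sample step Jen mentioned earlier for the BMI and genetic-testing metrics can be sketched like this. The seed, sample size, and ID format are assumptions for illustration; seeding just makes the draw reproducible when rechecking charts later:

```python
import random

def sample_for_review(patient_ids, n=20, seed=2024):
    """Return a reproducible random sample of up to n patient IDs for chart review."""
    if len(patient_ids) <= n:
        return list(patient_ids)  # small quarters: review everyone
    return random.Random(seed).sample(list(patient_ids), n)

# Hypothetical quarterly list of 60 MRNs, sampled down to 20 charts to review.
quarter_list = [f"MRN{i:04d}" for i in range(1, 61)]
sampled = sample_for_review(quarter_list)
print(len(sampled))  # → 20
```

Using a dedicated `random.Random` instance rather than the module-level functions keeps the draw independent of any other randomness in the script.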
So, after the reports are released from QNET each quarter, I provide each of these groups an update via email, and this is an example of what the document looks like. I include our local data, as well as the chart from the report, along with the definitions for each metric. In the documents I send out, I highlight which center is ours, so they know how we compare nationally. This provides regular updates to attendings, managers of outpatient cardiology and different programs, and the sonographers, to remind them what we are tracking and what metrics we continue to participate in. And most are great about reaching out if they have any questions or concerns about how we're performing or if there are any areas that we need to improve upon. Which brings me to the chance to somewhat showcase how our heart center is so QI-driven. I was hired as a QI nurse in July of 2015, which was the year we started participating in QNET. And my hiring was really to be able to participate in QNET and other registries, as well as to help lead and manage QI projects throughout the heart center. Then our cardiology program went through some transition in the next few years, and we were able to bring in Dr. Judson Moore from Texas Children's in 2021 as our QI director. He has formal training in quality improvement and brought with him a desire to build a better healthcare system that identifies sources for improvement. And we have participated in a few QNET projects that gave providers MOC credit, like Dr. Park mentioned, which is always a nice driver for participation. We participate in just about every registry within Cardiac Networks United, which are all very heavily focused on QI. And with our ever-growing program, we hired a second QI nurse in March of last year. And to date, we have a total of 17 projects completed, 13 that are ongoing, and 11 QI publications. And I just wanted to quickly highlight the KD QI project that we participated in with QNET.
In the next few slides, I'll go through our data and performance for each of the KD metrics. The first PDSA cycle happened in quarter 2 of 2018, where you can see that black arrow. You will see that before the intervention, there was quite a bit of variability, and then after, we have consistently stayed near 100% in just about every single KD metric. So, here's the first one, aspirin therapy upon discharge. This one is cardiac evaluation within three weeks of hospital discharge. Evaluation of fever at the first follow-up visit. Documentation of no restrictions on physical activity. That one still had some variability, but has been more consistently 100% in the last few years. And documentation of complete echo evaluation on every echo. And that's all I have. I'm happy to take questions, or feel free to email me if you'd like. And thank you so much. Thank you very much. Dr. Park, I'm sure you have questions, but if anyone has any questions, again, I don't see anything in the chat, but feel free to reach out to me. And we'll continue this discussion, either for Jennifer or for Dr. Suthar. Jennifer, that was fantastic. It's obvious that your entire team is very involved, and you put in a lot of work for that. I love the follow-up emails that you give everyone on the team after data is submitted, and I think that's really helpful. I think that's a great idea. And just for everyone on the meeting, a lot of those data-over-time graphs that Jennifer obviously spent a lot of time in her own work developing are part of what we hope to do with the future dashboard. So, hopefully, those kinds of tools will be available for everyone going forward. I could talk and ask questions forever, but I think we should allow other people to ask questions. And I do see a question here in the chat from Dr. Shaw. Did you have IT help out for the automation of data collection? Yeah.
So, we have specific data abstractors for the Heart Center. So, I guess I'm not quite sure of the difference between that and IT, but yes, we have support from IT and data analysts. And like I said, meeting with them to define every single metric and what is required for each of them was super helpful and beneficial, like identifying the ICD-10 codes and the time frame of each metric. Can I ask what electronic health record do you use? What system? We're on Cerner right now, but we are moving to Epic next year. So, that'll be fun, trying to convert everything. Well, and that goes back to, as I've said before in this meeting, part of the purpose of QNET is that a lot of sites are already on Epic and might be able to give you ideas for how to convert everything over, or, as you learn ways to use Epic, you can share with the group. Because so much of what all of this data collection is, is using the tools available and ways that each of us can help each other develop, like was mentioned by Dr. Suthar earlier with the Epic ordering system for echoes to track the appropriate use criteria. Those are the kinds of things that we need to share with each other and help each other with across the country. And maybe at some point, we can get to the point that we can tell Epic, this is what we want you to build for everyone, instead of us having to manipulate it at a local level. Since no one else is asking questions, I'm a little curious, why did you decide to retire certain metrics again? And what's the motivation behind not reporting on those metrics? For the majority of them, it was because there weren't enough participating centers, so we couldn't compare ourselves to others. And then we were consistently at 100%. And so, without missing one patient ever, it's just like, well, we don't need to spend time on this. Dr.
Suthar, sorry, I put you on the spot here a little bit with this, but, you know, with this metric, I know we highlighted that you're no longer submitting data for metric 28, right? Do you want to talk a little bit about that with your site? You know, what did that look like internally? Were you part of those discussions? They may have predated you. I'm not sure without Dr. Sachdeva on. Yeah, actually, I'm not privy to that discussion, so I'm not sure. But, you know, we have been doing this, even just at the outpatient level, for a long time, incorporating the AUC into our EMR, and gosh, I think it's been at least five or six years. Like, you know, we've been doing it very consistently. And we were providing feedback up until about three years ago, I think, to all the physicians. You would just get your quarterly emails: these were the patients for whom you ordered a rarely appropriate echo. So it was just more for you to see, hey, where did I miss this? Was there a reason, or was that just an incorrect entry by a sonographer, or where did things break down? So I think, I mean, I don't know exactly what happened with this specific metric, I'm not sure. But, you know, even the feedback and all of those things, we've scaled down, because we are probably at less than 5% of echoes ordered being rarely appropriate. So there were really not a lot of changes we could make. It's interesting for both of you. This goes into something that's been on my mind a little bit, in that there's quality improvement, which was the original design of QNET, but then there's quality assurance.
And if you look at some of our adult colleagues in NCDR registries, they still report on metrics even if they're performing at a very high level, because of the need to provide that quality assurance within the registry. And I don't know what that will mean for the future of QNET, but it is a thought that I've had as I've learned more about what NCDR is and how the adult registries work within NCDR, which is very different than the way QNET operates. Dr. Shah, I see you have your hand raised as well. Yeah, thank you. A question for Dr. Suthar or, you know, others: we have Cerner, and I'm wondering whether other centers have been able to use or incorporate the AUC criteria in Cerner order entry. And then another follow-up question for Dr. Suthar as well: you said, you know, there were only about 5% that were rarely appropriate. Has there been, you know, pushback from the parents or others, for some, you know, needy patients or parents in a way? That's interesting. So I'll just talk about the EMR incorporation part. So we use Epic, and I'm happy to share the build with Jen, because, you know, we have an amazing Epic person who works with us and is able to help us with some of these metrics that we've been able to incorporate. So I'm happy to share that with Jen, and maybe you can take it back to your Cerner folks and see if they can do something similar, because I'm sure there are some ways they can kind of match it up. Now to go back to, like, you know, needy patients. I mean, that's probably why you get some of those rarely appropriate ones, where you have parents who just have something in their mind, and no matter what you tell them, they feel like you're restricting them from getting an echo. And so that's somewhat of a cushion; you can't avoid those.
So there's going to be a small percentage, and I think that's the part of QI that most of us know: you'll never get to 0% rarely appropriate, right? It's going to be extremely rare. You're going to get to a point where, most of the time, you're using the appropriate criteria, and that's the goal. So that's why you can never say that 0% is your goal. I think that's a challenge we all run into. And one other thing that, you know, sometimes we come across is that unclassifiable scenario, where you think, oh, it seems like you should do an echo in this patient because of this bizarre history or whatever, but there's not a specific AUC criterion you can use for it. So I'm sure, you know, they have more iterations of those things coming up. And that's the cushion, I would say, that we have for the percentage of ones that are in the rarely appropriate or the unclassifiable range. Thank you. And Lisa from Children's National made a note in the chat here. We've retired around 10 metrics since we started participating in ACPC QNET after maintaining 100% performance for more than four quarters. So again, it's helpful too, when you're completing that data submission survey, you know, there is that free text line as well. We really do look at that. I know that that's, you know, additional time, but it really is helpful for us to have that context, and we'll be exploring more of that throughout the year. And you're also, you know, more than welcome to reach out; I've had sites that reach out and say, we're going to stop reporting this metric for this reason. That context is very, very helpful. And again, we hope that through learning sessions like this, right, we can have more of a dialogue and give a bit more context into why you're not seeing certain metrics on those reports. And again, to Dr.
Park's point, it's really very helpful to have participation, right? To participate in different ways: presenting during learning sessions, submitting data, giving us feedback. All of that is very helpful because it helps other sites improve the quality of care they're providing. So thank you very much. Any other questions? Again, anyone is welcome to unmute or raise your hand. Or anything else you'd like to add more generally, Jennifer, Dr. Suthar, Dr. Park? I was just thinking, if you end up getting enough participating centers in some of those metrics we used to submit but no longer do, I'd be happy to resubmit that data. Excellent, thank you. Yeah, and again, we do recognize that, particularly for new sites that are starting and would like to participate. I think that's part of why we're expanding our MOC offering as well: it gives sites an opportunity, but they're not necessarily going to get the benchmark unless we're collecting data from a sufficient number of sites. So that's really helpful feedback, and as we push out information and updates to centers, we'll make sure we incorporate that as well. Lisa also made a note here: "We've had a hard time with data collection for TOF. It's very time-consuming. Jennifer, have you found a way to narrow the returned data query to get rid of PS and known genetic diagnosis?" We've heard this question from others too, right? No, no, that's why I put it as one of the most difficult to collect. It's so time-consuming to go through all of those patients and then read that they've got TOF with absent valve or, you know, MAPCAs, and it's like, oh man, I've got to take that one out too. So yeah, we have not found a way to get around that. Well, I can say that at my site, with tetralogy of Fallot, it is very time-consuming.
We're the only people in the country, I think, that use GE Centricity as our EHR, so I don't know how much our build would help. But one thing that's made it a little easier: we've incorporated a system where, once you've recorded the positive or negative genetics, that patient is eliminated from future reports. So when I get the report of patients to review each quarter, it's just the new ones that haven't been seen before, if that makes sense. There are fewer patients I have to sort through to ask, do they have MAPCAs? Do they have other genetic abnormalities? Instead of the whole list, which at our group is usually around 30 tetralogy of Fallot patients per quarter. Once they're a tetralogy of Fallot patient whose genetics we've acknowledged, they don't come up in the search, and it does limit the amount of time I have to spend on those chart reviews. And we're, we have- Hi, Matt. Sorry, go ahead, Michael. It's Michael, I apologize, I've been in clinic. But one thing we did regarding our TOF genetic screening: we created an ad hoc form where we addressed all those exclusions upfront, and that made data collection very easy. All of the exclusions were listed in a checkbox format so they could be addressed right up front; you didn't have to do it in some sort of retrospective manner. Our project manager was then able to identify all those exclusions very readily and pretty efficiently. That was done in Cerner. We're transitioning to Epic also, so we're going to have to readdress that. Well, you've all made my job much easier, because this is a perfect transition into the last couple of minutes to close out the session. Our next learning session will be the first week of June.
I will be getting a save the date out next week, but we're going to have an ACHD focus for that entire session, and TOF and Fontan will be very much central to that discussion. We're inviting two community engagement representatives, one patient and one cardiologist from the Fontan Outcomes Network, to present and share a little about the work they're doing. And then we'll be welcoming our two new adult cardiology steering committee members to our group as of April 1st, so they're also going to be presenting and answering questions. So more to come, and hopefully we can carry on some of this discussion. I want to thank everyone. Thank you, Dr. Suthar and Jennifer, Dr. Park, everyone who had questions today and contributed, and also Dr. Stravostova for jumping on and asking a question as well. I look forward to seeing many of you at ACC24 in just a month now. And again, if you have any questions, or if you would like to connect with other centers, you can certainly get contact information through ncdr.com, but please always feel free to reach out to me with any questions you have. Thank you everyone for your time today. We'll get this posted on the site next week for anyone who wasn't able to join live. And welcome, Dr. Park, officially. Thank you everyone, have a great day, and thanks for all your participation and support of QNET. Have a good one.
Video Summary
The meeting introduced the new chair, Dr. Park, who discussed updates on QNET and MOC Part 4 credits for physicians, emphasizing the importance of updating profiles on the NCDR website. Dr. Park outlined how to earn MOC Part 4 credits, focusing on data submission and meaningful participation. Metrics, particularly metric 28 on the Pediatric AUC, were discussed, with Dr. Sattar presenting findings on its utilization. Variability in ordering practices was noted, and the importance of using the AUC as a decision-making tool was highlighted. The data collection process for QNET was explained by Jennifer, covering metrics participation, data extraction automation, and retiring high-performing metrics. IT support and EMR integration were discussed, along with challenges in data collection. Quality improvement projects within the heart center were described, emphasizing the impact of feedback. The next learning session, on TOF and Fontan patients, was previewed, with community representatives and new committee members invited for collaboration within the Quality Network.
Keywords
QNET program updates
new program manager Jen
education updates
participation awards 2020
quality improvement metrics
MOC team
change management
barriers in QI initiatives
image quality metrics
fetal echocardiography
Dr. Anita Moongrady
Dr. Luciana Young
Pediatric Appropriate Use Criteria
outpatient echocardiograms
educational interventions
resource utilization
clinical workflows
data collection process
QNET metrics
echocardiograms
retiring metrics
participation
ACHD focus
Fontan outcomes
data collection challenges
tetralogy of Fallot
collaboration
meeting
Dr. Park
QNET
MOC Part 4 credits
NCDR website
metric 28
Pediatric AUC
ordering practices
AUC
QNAP
quality improvement projects
heart center
Fontan patients