1121 - ACPC Quality Network Learning Session - MOC ...
19 November 2021 ACPC Quality Network Q4 Learning Session Recording
Video Transcription
Welcome, everybody. I'm excited that we were able to have a learning session today, and we have a packed agenda. I'm actually in clinic, so I might get interrupted, but I think we'll just go ahead with the program. Next slide, Jen. This is our agenda for today: after some updates, we're going to be focusing on education and then introducing some of our new measures. The first thing we need to do is to welcome Jen, our new program manager. This is the first time we've had a program manager with a lot of clinical experience taking over for QNet, and working with her so far has been spectacular. Jen, do you want to give a brief introduction of yourself? Absolutely, thank you so much. It's been a pleasure. I have been with this program for three months now. I came from nine years in emergency medicine, primarily on the administrative side, where my team and I managed all of our medical education programs and faculty support, so credentialing, onboarding, all of that. I also had experience with medical education programs in India and contract sites that we had in the Middle East. I went through an EMT program while I was there to get some clinical experience, so this is my first foray into cardiology, but of course I have worked with a lot of cardiologists and have really enjoyed my time here. Thank you; I look forward to working with all of you. I've engaged with some of you already and it's been lovely, so thank you so much. Yeah, thanks, Jennifer, and I've really enjoyed working with you so far. I would encourage anybody who wants to to reach out to Jen with any questions or issues. This is our latest roster of the 45 sites participating in our QNet project, at least for 2021, and as I understand it, we're on track to have everybody renew, with just a few groups dropping out in 2022.
So, welcome, everybody; it's just been great to see how much participation we've had from so many programs. I can give a brief update on this. These are the participation awards. Can you see the participation awards? Yes. Okay, I can't, so go ahead, Jen. Okay, great. We kicked these off this year, so this will be the first year. For 2020 participation, we're going to be posting to our website on Monday and also getting emails and certificates out to the 20 sites being awarded for 2020. For 2021, now that we have this established process, we will be identifying the 2021 participant awardees as we finish up the Q4 data analysis, so that should be coming out April or May of 2022. If you have any questions about the eligibility requirements or where you stand for the year, don't hesitate to reach out to me. Essentially it's tiered; I'll go to the next slide so I can show you. It requires at least one metric submission per quarter, and then, based on your submissions throughout the year, we determine your designation. Submitting one to five metrics every quarter for four consecutive quarters in the year sets you at the participant level; active participant is six to nine metrics submitted consistently over four quarters; and superior is more than ten metrics. So again, on Monday we'll be announcing the 2020 awardees, and the certificates will be mailed out to sites as well. As I mentioned, we'll be finalizing this for 2021 participation once we finish our Q4 data. Dr. Jenkins, I'm happy to pass it back to you if you want to talk about the metrics. Yeah, sure. Sorry, I dropped off briefly, but I'm back. We're going to be initiating a whole new set of metrics.
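The tiered designation just described can be expressed as a small rule. The sketch below is purely illustrative: the function name is made up, and the boundary for the superior tier is an assumption (the session says "more than 10 metrics" after a 6-to-9 tier, leaving exactly 10 ambiguous; here it is treated as superior).

```python
def participation_tier(metrics_per_quarter):
    """Classify a site's annual participation designation (illustrative sketch).

    `metrics_per_quarter` is a list of four quarterly submission counts.
    A site must submit at least one metric in every quarter to qualify,
    and the tier is set by the level sustained across all four quarters.
    """
    if len(metrics_per_quarter) != 4 or min(metrics_per_quarter) < 1:
        return "not eligible"
    sustained = min(metrics_per_quarter)  # metrics submitted consistently
    if sustained >= 10:                   # assumed boundary for "superior"
        return "superior"
    if sustained >= 6:
        return "active participant"
    return "participant"
```

For example, a site submitting 3, 4, 5, and 3 metrics across the four quarters would land at the participant level under this reading.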
We're going to be helping to pilot them in the spring with a full launch shortly thereafter. It's going to be a set of measures, including one set on tetralogy of Fallot. Some of this is going to be led by Fred Wu and a couple of other people, but we are looking for volunteers to pilot these measures and give us feedback on how they work. So if any of you or your sites are interested in doing this, please let Jennifer know so that we can get you going and really start a whole new phase of this project, where we have a much more significant presence in the ambulatory space. Thank you. To add to that, I added to this slide what participation entails for a site that's interested in piloting, and thank you; I think a couple of you on the call have sites that reached out to say you were interested in being part of the pilot. If you have not and you're interested, please email me; I'll include my email address in the chat as well. Essentially, we set up a 45-minute pilot kickoff call, you then have four weeks to gather the data, and then we have a one-hour feedback call after you've submitted some documentation and we've reviewed it. We're hoping to launch early in 2022. As we'll get into when talking about a new metric launching in Q4 of this year, it's the same process: we'll have a learning session, and then in the subsequent quarter we'll implement that metric for general data submission. That's everything we had for the welcome portion. Was there anything else, Dr. Jenkins, before we move on to the education portion? No, not really. We have a lot to do, so I think we should just move forward. Thank you, Dr. Sachdeva; if you'd like to go ahead with our MOC presentation. Can you hear me now, Jen? I can hear you. Okay, great.
So, I'm Ritu Sachdeva from Emory University and Children's Healthcare of Atlanta, a member of the ACPC QNet steering committee. On behalf of the MOC team of the steering committee, I'll be providing you an update on the QI activities that are approved for MOC Part 4. Okay, there you go. So, the MOC team on our steering committee includes Jen, whom you just met, Sarah Chambers-Gerson, Susan Salib, and myself. Our team met earlier this year and revised the metrics available for MOC activities. As you have heard from Kathy in previous learning sessions, ACPC QNet has partnered with the ABP and the American Board of Medical Specialties, ABMS, to obtain approval of these QI activities. The process for obtaining MOC credit is detailed on the QNet website, where you'll find an updated flyer with six QI activities and details of the steps needed to obtain the MOC credit. These are the processes listed on the website for each of these activities. Needless to say, this MOC credit is only available to sites that are part of QNet. You can form teams at your own site to participate in the QI activity, design and implement changes or interventions that are recommended in the key driver diagram, participate in reviewing reports for two PDSA cycles, and participate in at least two QI meetings at your site, as well as one quarterly collaborative QI learning session from QNet. Starting this year, we are excited to share that you can now submit for MOC Part 4 credit electronically rather than uploading paper files to the system, so I would like to thank the ACC staff for making this process much easier on us. I'll now delve into the QNet activities that are currently available for Part 4, but before we do that, I just wanted to bring up that certain metrics were retired after discussions on our steering committee calls, either because they were outdated or because they were just not being used enough.
So, there are six metrics that are currently approved, including some new ones that are listed here. Sorry, these are the old ones: the one for tetralogy of Fallot and doing 22q11 testing on these patients, and the two metrics for BMI, measuring and appropriately counseling these patients in the pediatric cardiology clinic. I wanted to highlight this publication from earlier this year that showcases the impactful work that can come out of multi-center collaborations through QNet. This article reports implementation of the BMI metric through QNet in 32 centers, with more than 27,000 patients included in the study. They reported a wide practice variation in reporting and counseling for BMI, ranging anywhere from 10 to 100 percent. They described the timeline and process used for implementing this metric and QI activity and reported an increase in counseling rates from 25 to 54 percent. The key driver diagram used for this activity delineates the various interventions that can be used to implement the metric. This work is a testimony to QNet's effort to promote shared learning for implementing QI activities, and we hope to see more work like this through other QI initiatives. Now, these are the non-invasive imaging metrics that have been in place since January 2018, and these are also approved for MOC credit. The key driver diagram for this was provided by Tacey, Kenan Stern, and Sasaki, and it's listed right here. Three new metrics were launched this month, including one for appropriate use criteria and two TEE metrics, one for accuracy of pre-cardiac-surgery TEE and the other for any adverse events related to TEE. In addition, we decided to add the previously created chest pain metrics that are built around documentation of family history, obtaining an ECG for chest pain, and an echo for exertional chest pain.
These somewhat overlap with the AUC metric that is focused on initial outpatient echo for the four chief complaints of chest pain, syncope, palpitations, and murmur. As you can see here in the chest pain table in the AUC document, several indications do include family history and ECG, and there's a separate indication for exertional chest pain. The key driver diagram for these metrics is shown here, and one could consider implementing interventions including education, audit, and feedback to your physicians on the appropriateness of the indications they use for ordering echoes. You can improve access to the AUCs by having laminated cards of the tables in the document, as well as by integrating them with your EMR. And finally, we have the COVID-19 metric, which focused on appropriate use of metrics to identify and address gaps in care during the pandemic. We conducted a survey of QNet sites last December to study the perceived impact of any changes made during the COVID pandemic on the various imaging metrics. The questions revolved around compliance with the imaging metrics, any perceived barriers, protocol modifications, use of sedation, and perceived impact on diagnostic errors and use of AUC. The goal of the survey was to lay the foundation for any interventions that may be needed to sustain quality amidst the pandemic. Results of the survey were shared in another session that we held in March this year, but overall, the perception was that compliance was unchanged for most imaging-related metrics. Some perceived that the diagnostic accuracy, quality, and completeness of initial exams may have decreased. This was the key driver diagram that was proposed at the time for intervention, with a global aim of continuing to provide high-quality care during the pandemic. I just wanted to share that this has been the overall use of MOC Part 4 through ACPC QNet this year.
I would say that there's certainly scope for more people to use this great venue that the ACC has provided us to obtain MOC credit. I would like to thank my MOC team and the QNet steering committee, and thank you all for joining us in QNet. If you have any questions about how you can obtain the MOC credit, please feel free to reach out to us. Thank you. Thank you so much. I'll pass this to Dr. Roballero for his portion. Let me go ahead and pull up your presentation. If you want to go ahead and give an introduction. Yeah, good afternoon or good morning. My name is Michael Roballero. I'm a pediatric cardiologist at Le Bonheur Children's in Memphis. Just wait for the slide to come up. All right. I appreciate the opportunity to speak with you all today, and I'm going to talk about something that I'm sure you all are familiar with, but maybe you've never heard a formal talk on: change management. I know that many of you lead QI initiatives at your individual institutions, and it's very important to understand that managing change is one of the most important aspects of any QI initiative. I've seen many QI projects result in short-lived improvements, but because the change was not really anchored in the culture, it's not sustained and simply fades away with time. Next slide, please. So John Kotter from the Harvard Business School, now retired, is a leading authority on change management. Even though his article was written more than 20 years ago in Harvard Business Review, it remains the definitive work in this area, and it's part of the must-read series if you read that journal. The article previewed his book, Leading Change, in which he outlined an eight-step method to bring about long-lasting organizational change. As many of you know, the largest impediment to organizational change is culture, and you simply can't change it by decree; you have to go through a disciplined, systematic approach to anchor change in the culture. Next, please.
So establishing a sense of urgency is the most important step, and more than half of organizations fail in this first step because complacency levels tend to be high. You have to create a sense of urgency so that your staff or doctors or trainees will make the many sacrifices needed for the effort. This requires a hard look at the competitive environment and the macro trends; maybe for us it would be the U.S. News and World Report ranking, or even something like a sentinel event. There are many potential pitfalls. Some executives will underestimate how difficult it is to change the prevailing sentiment that the status quo is fine, and you need to convince leadership that the status quo is worse than the unknown. Another example that could increase the sense of urgency in our QNet universe may be something as simple as MOC completion timetables or quarterly data deadlines; that's what seems to get people excited and motivated. Next, please. The next step is to create a powerful guiding coalition, and this is a crucial step. Remember, an idea for change can start with a few people, but it needs to extend throughout the organization to be successful. By developing a large guiding coalition, you develop the momentum to overcome the tremendous resistance forces, and if this is not accomplished, the opposition will gather itself and stop the change. So, for example, a department chair and maybe a weak committee is not enough; that's not a broad enough guiding coalition. You need to include key players, broad expertise, high credibility, change agents. All of this is essential in setting up your guiding coalition. Sometimes, in order to kickstart an effort, and I'm talking here about broader transformations, you may want to consider an off-site retreat. This is a common way to get everyone on board, and it helps to create trust and foster teamwork.
Next, please. So, while a sense of urgency and a guiding coalition are necessary, they're not sufficient for successful change. You have to have a sensible vision to direct the overall effort. A well-crafted vision is a clear direction toward the future, which aligns people in the organization, and you have to develop strategies to realize that vision. You should be able to explain it in five minutes or less, and this is one of the situations where less is more. It can't be too complicated or buried in strategic plans, or it may actually turn off the staff. And of course it needs to appeal to all the vested stakeholders in the organization. Next, please. This is why it's so important to follow the steps in order. If you have not developed a sense of urgency or a powerful guiding coalition, or you have an unclear vision, you're not going to be able to effectively communicate the vision. Undercommunicating by a factor of 10, 100, or even 1,000 is a common error. A single meeting, a single email, a single announcement uses only a fraction of the communication potential; you have to use every possible vehicle for the communication effort, with lots of repetition. Another possible pitfall here is to behave in a way that is antithetical to the vision. I'm sure you've heard the expression, you have to walk the talk. That's extremely important, and it will be perceived negatively if you're not behaving in concordance with the vision. Next, please. Next, you have to empower others to act on the vision, and this can also be phrased as removing obstacles to change. I know empowerment is a buzzword that gets thrown around a lot, but it does have real meaning here. Failure to remove obstacles, such as overly siloed organizational structures or middle managers who are just interested in the status quo, can derail the change process by making employees feel disempowered.
And occasionally, to empower your staff, you may have to provide some additional skills training as well. Next, please. So next, planning for and acknowledging short-term wins or results is essential. Short-term wins help to motivate the staff toward the long-term goal so that momentum is maintained, and they help to keep the urgency level up. You should recognize and reward employees who contribute to the change effort. Ideally, these short-term wins should occur early in the change process, usually within 12 to 24 months. If it takes too long and you're only focusing on the long-term goal, the staff actually may begin to lose interest. It's said that change management is a combination of leadership and management, and this is where good management is required. Next, please. Next, it's important to consolidate improvements to produce even more change. This can also be phrased as: do not declare victory too soon; it will stop all momentum. Remember that improvements that are not hardwired will disappear, because any improvement at this stage is usually very fragile. Change leaders should use their increasing credibility to tackle bigger problems, along with giving increasing responsibility to managers for new projects as well. Next, please. So the final step is to hardwire improvements into the organizational culture. You know things have changed when people say, oh, this is the way things are done around here. Hardwired means your change effort has sunk into the DNA of the organization, and people have to see the connection between the new actions and behaviors and organizational success. Until these new behaviors are deeply rooted in the new norms and shared values of the organization, they will remain fragile. Another way to say this, for you gardeners out there, is that shallow roots require constant watering to keep them from drying up and fading away.
There should also be a concerted effort to prioritize hiring, leadership development, and succession planning that are in concert with the vision. Next, please. So with this short introduction, I hope I've made the point that cultural change comes last in this process. Let's just run through the steps very quickly: create a sense of urgency, create a powerful guiding coalition, develop the vision, communicate the vision, empower others to act on the vision, plan for and acknowledge short-term wins, consolidate improvements, and finally hardwire improvements. You'll have to consider these steps as you go through your QI initiatives or broader change efforts. I was speaking with one of our Cincinnati colleagues at one of the IHI meetings some time ago, and we were basically saying that, although there may be pockets of excellence, it's difficult to maintain sustained improvement; there are often too many priorities, and variation may continue. Remember, quality is everyone's job. I want to drive home the point that transformation initiatives are really a process rather than an event, and given the pace of change in our specialty and in medicine in general, expertise in change management is more important than ever. So, next slide. These are my references, and thank you for your attention. If anyone has any questions, I'm watching the chat. You're also welcome to ask any questions regarding the MOC credits or change management. Jen, while we're waiting on questions, maybe you could just highlight what the process is on your end for submission of the MOC credit. Absolutely, thank you. So when you go through the link to submit your credits, once you've submitted, it'll go to the PI at your site so that they can see that the attestation has been submitted, and I'll get a copy of that as well. Generally the way it works, we submit monthly to ABMS.
So that typically happens between the 1st and the 5th of the month following your submission of an attestation. For instance, if you submit it on November 1st, that attestation would go to ABMS on December 1st. It typically takes them just a couple of days to process; technically, between ABMS and ABP, they have 30 days to get it posted to your individual record, but it generally happens within a couple of days. So don't worry if you submit something early in the month and don't see it immediately reflected. And you're always welcome, I'll include my email address as well as the ACPC QNet email in the chat, but don't ever hesitate to reach out if you're checking on credits. This is a good time, because we're coming to the end of the year, so I would certainly encourage everyone to check; if you are waiting, go ahead and reach out to the ACPC QNet email address and we'll get back to you quickly. Just double-check that the credits you've submitted are reflected there. Looking at the chat, Seda has put a great comment here: yes, culture can eat you for breakfast, lunch, and dinner, and it's the hardest thing to change. So Michael, thank you for such a great presentation. It's such an important topic, and it's so important to talk about. I was just wondering if you could share some examples of barriers you may have faced in implementing these things at your center and how you got over them, just some practical tips and tricks. Right, so I appreciate that question. Yes, I mean, I guess the real point of the short introduction was that this really is a process, and if you skip the steps, it's at your own peril, because it will eventually fail. But really, the sense of urgency, I think, is the most important.
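The submission timeline just described can be sketched as a small date calculation. This is only an illustration of the stated schedule; the function name and the return shape are made up, and this is not an official tool of the program.

```python
from datetime import date, timedelta

def abms_batch_window(submitted: date):
    """Estimate the ABMS batching window and latest posting date.

    Sketch of the timeline described in the session: attestations are
    batched to ABMS between the 1st and 5th of the month following
    submission, after which ABMS/ABP have 30 days to post the credit
    (though it usually happens within a couple of days).
    """
    # First day of the month after the submission month.
    if submitted.month == 12:
        batch_start = date(submitted.year + 1, 1, 1)
    else:
        batch_start = date(submitted.year, submitted.month + 1, 1)
    batch_end = batch_start + timedelta(days=4)      # the 5th of that month
    latest_posted = batch_end + timedelta(days=30)   # 30-day posting window
    return batch_start, batch_end, latest_posted
```

Under this reading, an attestation submitted on November 1st would be batched December 1st through 5th, with posting required by early January at the latest.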
Then, having a vision, like we talked about, and empowering others to be your change agents, because sometimes there's so much complacency and so much resistance that you really have to empower them. Sometimes some coaching is required, or some additional training. Just for example, we don't have as big a QI department as, say, Cincinnati, but we have some very good data analysts, and I've had one of our data analysts work with our lead QI sonographer to coach her in QI skills, data analysis, data presentation, and so forth. Here I'm really referring to developing a QI culture within the echo lab, for example. And again, getting people on board, creating a sense of urgency, having a vision, and empowering the staff: those are all critical steps. That's great. So Michael, urgency, understanding the need, and communication are three things that go hand in hand. And I really feel that a lot of centers and places struggle with even convincing their leadership that we need to be part of QNet, or that this would help. The quality departments in hospitals just focus on CLABSI and CAUTI and, you know, the big identifiers. So areas like the echo lab get a stay-in-the-corner kind of mentality. Right, right. And we face those same issues, of course. Our marquee programs consume most of the QI resources; the surgical program, cath lab, and CVICU tend to swallow up all of them. And you have to make the case for some of these under-the-radar services; I would consider the echo lab maybe slightly under the radar at times, taken for granted, unfortunately.
And you really need to make the case that this is an important part of the overall QI portfolio of the heart institute, and that does take a lot of selling. I can say, well, I understand that the QI resources are being gobbled up by some of the marquee programs, and that is important, but you can't neglect some of the other services: for example, the echo lab, EKG, Holter, all the ambulatory monitors and so forth, and all the ambulatory issues, ACHD, Fontan metrics, all the ambulatory metrics. It really takes a lot of talking, and a lot of just what Dr. Jenkins has been talking about: creating return on investment for network participation. So we have to show them the key driver diagram and highlight all of that. I wonder. Yeah, Dr. Morelli, go ahead. I wonder if it might be interesting or useful, Michael, I thought this was such a great presentation, to have QNet develop a little bit of a guidance document. I'm not going to call it a guideline, but we have so many guidelines that help us manage clinical care; we take them to our hospitals and we say, look, these are the guidelines for managing this, that, or the other thing. I wonder if it would be useful to have a guidance document, or a roadmap, or whatever we might want to call it, that the group would develop in conjunction with the ACC, so that at the individual level you go to your hospital and say, look, this is what's been developed at ACPC, and this is the roadmap. Let's choose a domain of quality indicators and sequence them, say ambulatory, echo, surgery, and then go to the institution with a very clear roadmap, almost like a checklist, to say, this is what we need to have in place, to standardize a little bit what you're talking about. I wonder what you would think of something like that.
Well, I guess my personal feeling is that I want to have a robust QI portfolio that showcases the vast, diverse QI efforts that are going on; we belong to more than 13 registries. Our data analysts are very overworked; we probably need two or three additional FTEs. I personally would like to see more real-time dashboards. For example, at case conference we've sometimes been accused of being a little far behind, because by the time the data finally gets analyzed, some trend has already been going on for several months. So yes, I'm all in favor of more structure. A lot of research has been done in this field, of course, and sure, I would be happy to discuss it offline in more detail with you. Great, we have some great comments coming up in the chat, but I think we'll segue into the second half now. Jen, over to you to introduce the second half of the session. Great, thank you both so much. We'll send out any presentations after the sessions today, and if you have questions, of course, reach out to me or continue posting in the chat. And I'll pass this on to the metric presentation. Thank you, Jen. Jen, is that my cue? Sorry? Oh, do you want me to pull up your presentation, or you should be able to share? Yeah, I'm just trying to. Jen, can you pull it up? I think I'm having a little technical glitch here. Sorry. Thank you, Jen. Good afternoon and good morning to everybody. I'm Shubhika Srivastava, and I'm really privileged to be part of the QNet steering committee, working with this elite and really intelligent group of people, and getting to share this work and knowledge with all of you here today and learn from you. Next slide.
As we all know, the work on the imaging quality metrics started as early as 2015, and it was meant to standardize practice, identify gaps, create a learning environment for continued improvement, support process evaluation, evaluate appropriate use of certain modalities, and evaluate outcomes and critical reporting to affect patient care, safety, and quality of care. Next. The number of members involved in developing the QNet metrics exceeds 50, and it continues to grow. These projects were spearheaded by Leo Lopez, who brought the entire group together through multiple meetings and multiple subgroup leads, as shown on the slide. Next. The metrics that were finalized, approved, and put into practice were endorsed by all the imaging societies, including the ASE, SOPE, and the Fetal Heart Society, and there have been talks with SCMR and other societies about making multimodality imaging metrics. Next. Ritu has already shared the list of all the metrics and the ones that were put into MOC use, but here is the list of imaging metrics. You can see that three metrics were released in 2018, and I'll share some data about them. Most of the other metrics went through the pilot phase and were released starting earlier this year. Enrollment of sites into those metrics has been slow as the metrics have been released; currently, metrics number 29 and 30 have, as you can see, two or three sites, and other sites are slowly signing up. Most of the transthoracic metrics have been in significant use, the TEE metrics are starting slowly, and the fetal metrics, the ones released in the first and second quarters, are higher, at six to nine sites in use. Thanks. I think we can skip this slide, as we just talked about the metrics. Today we're going to be introducing the image quality metric for the initial fetal echocardiogram. Next.
The lessons learned through pilot testing will be shared by Anitha and Luciana Young, who will go over the entire change process and the things that we learned, and Dr. Moon-Grady will focus on the education needs that were defined during the pilot process. Next. I know this key driver diagram is a bit trying on the eyesight, but, as Dr. Sachdeva mentioned, this key driver diagram for imaging metrics in transthoracic echocardiography was initially put together by Dr. Stern, Sasaki, and Tacey. We modified it a bit to include the interventions that would be needed to optimize it for fetal echocardiography. These key driver diagrams are just a guide; different sites can incorporate them into their institutional practices and add other levels of key drivers in education, processes, and implementation. Next. This slide shows metric 27, which has been in use since 2018. This is an average of all the sites that have participated, and you can see that over the years the number of sites participating in the comprehensive transthoracic echocardiogram evaluation metric has gone up, from about six sites initially to about 14 sites. You can also look on the right-hand side, on the y-axis, at the average points, and you can see there is a trend: each year you start off in the first quarter with a slightly lower benchmark, and as you go through to the fourth quarter, whatever interventions you did in that year improve your benchmarks and the measured quality number. Then you start over; it's an interesting cycle. We'd like to see how it applies to others as we learn more about the processes. Next. And this is the transthoracic echocardiogram image quality metric, where you can see a similar sinusoidal pattern that starts every year with a slightly lower benchmark and then goes up.
And again, you can see the number of sites that have been utilizing this metric. Next. And this is just an aggregate of all the metrics that have been used, and as you can see, the transthoracic ones have been the most used. Next. And to save time, I'm going to pass the baton to the two Anitas and Looch to talk about the quality metric and the process. In conclusion, basically this process allows us to standardize, improve, and assess diagnostic errors; it improves critical and adverse event reporting; and it can span across all imaging modalities. And without further ado, over to, is it Dr. Pratibha, with Jen driving the slides? Thank you, Anita. Thank you, Shubhi, for leading the non-invasive imaging metrics, and thank you to the QNET steering committee for giving me this opportunity to present our metric today. So Jen is going to pull up my slides and drive them; we'll just give her a moment. I have a short talk here where I will review the fetal image quality metric. Perfect, thank you so much. So I'm Anita Pratibha from Texas Children's Hospital. We'll go to the next slide, please. We all know that image quality is key to diagnostic accuracy. At the same time, image quality is quite a subjective assessment. So this fetal image quality metric, number 36, tries to, I'm sorry, can you go back to the previous slide, please? Thank you. So this metric is designed to give us some objectivity so that we are able to assess the average image quality score for initial fetal echocardiograms. These are echoes that are first-time, complete studies, restricted to fetuses with structurally normal hearts. So for this particular metric, the numerator is the sum of all of the image quality assessment worksheet scores, which I will go over in detail.
And then for that particular measurement period, the denominator is the number of complete fetal echocardiograms at greater than 18 weeks gestational age that were assessed for that measurement period. Next slide, please. So as I mentioned, for the inclusion criteria, we will include initial, complete fetal echocardiogram studies at greater than 18 weeks gestation, and these will be fetuses with structurally normal hearts. We will exclude fetuses with any abnormal anatomy, rhythm, or function. First-trimester fetal echoes are excluded, as are multiple gestations, repeat studies, and studies with poor acoustic windows due to maternal body habitus, fetal position or movement, or otherwise. Next slide, please. So the data will be collected on a quarterly basis, and we tried to formulate this metric in keeping with similar metrics: number 26, the initial transthoracic echo quality metric, and number 33, the comprehensive fetal echo metric. So similar to the comprehensive fetal echo metric, we suggest that a minimum of 10 fetal echoes per center should be reviewed for this metric, and if the center performs fewer than 10 fetal echoes per quarter, then all of the studies for that quarter will be reviewed. The overarching goal for this metric, like the others, is for centers to review their own performance with their staff, assess opportunities for improvement, and accordingly implement PDSA cycles to drive improvement in their labs. Next slide, please. So this is a portion of the image quality assessment tool, and a PDF fillable form will be available to all the centers participating in this metric. The initial part of it is some demographic data and some data for logistics that you may want to track in your lab, for example, the machines used or the time spent for review. Next, please. The image assessment tool itself is divided into four categories. The first is 2D imaging, which carries six points. The second is rhythm, which is one point.
Color Doppler carries four points and spectral Doppler carries three points, so the maximum possible score for each echo is 14 points. Next slide, please. So how do we score? Again, as I mentioned, assessing image quality is of course subjective, and as we all know, our fetal echoes can be somewhat long, and the fetus may move multiple times during the study. So how do we assess the image quality when the same view may look different at various times during the study? This was a question that was debated quite a bit, but basically we decided that the question is answered yes if, at some point through the study, you find images that meet the stated criteria for quality in each category. We do know that fetuses will move and that quality can vary throughout the study, but if at some time during the study an optimal image, for example of the aortic arch, is obtained by 2D, color, and Doppler, then we should answer yes. Next slide, please. So going to 2D imaging first, as I mentioned, there will be six points for 2D imaging. The first relates to the ultrasound output settings, and Anita Moon-Grady will go into this a little bit more, but we want to make sure they are appropriate and consistent with the ALARA principle. The second is the brightness and contrast level, so that individual structures are clearly defined; this means the gain, compression, the TGCs, and other settings. The third is balanced penetration and resolution, which translates to appropriate transducer choice and imaging modalities, such as harmonic or non-harmonic imaging, resulting in the maximal image resolution for a given depth of imaging. Next slide, please. Number four is the zoom and region of interest, and I do think this is particularly applicable to fetal echo.
The fetal heart should fill at least one third of the imaging sector display, and furthermore, the focal zone, if applicable on the machine, should be appropriately positioned at the region of interest. The fifth point for 2D imaging is cine loops, and I think we all probably do this: the fetal heart should be examined as a moving structure, and images are to be saved as video clips in the form of cine loops and sweeps. Point number six relates to the sweeps themselves. The study should have appropriate sweeps of the fetal abdomen and chest with appropriate transducer alignment, so that the visceral situs and segmental anatomy are correctly depicted. Next slide, please. Going into the rhythm assessment, we recognize that labs may have different methods of assessing rhythm. If it is by M-mode, the ideal image should be obtained by aligning the M-mode so that there are clearly identifiable waveforms for the atrial and ventricular contractions. And if rhythm is assessed by Doppler, then the sample is appropriately placed and the Doppler tracings are optimized, as I will go into a little bit further in subsequent slides. Next slide, please. Going to color Doppler, this carries four points. The first point relates to the frame rate. This again means selecting the transducer, the imaging depth, box size, and other settings so that we obtain the highest frame rates possible, and we specified that greater than 20 frames per second, when possible, is desirable. The second relates to the Nyquist limits: they should be set appropriately to the structure being investigated so as to allow for diagnostic imaging. That means greater than 50 centimeters per second for the inflows and outflows, and appropriately lower scales for venous flow. The color settings also earn a point, and this means adjusting gain, color frequency, and other settings so that there is appropriate color fill of the structure being interrogated without excessive color bleed.
And finally, the color persistence is set to low or none, such that the color fill is appropriate to the cardiac cycle, understanding, however, that for structures such as the pulmonary veins, which are very low-flow structures in fetal physiology, we may need to change that and add some color persistence. Next slide, please. Going on to spectral Doppler, there are three points assigned. The first relates to alignment and placement of the Doppler sample. As we all know, we want to be as parallel to the direction of blood flow as possible, and an insonation angle of less than 20 degrees is desirable. We also pay attention to the sample volume size and position. Now, for some of the Doppler signals we are looking more at the pattern, like the umbilical vein and ductus venosus, in which case this is not applicable. Point number two relates to the appropriate Doppler scale and baseline, such that the Doppler envelopes are complete with maximal signal size and minimal artifact. And point number three relates to the sweep speed, which is adjusted appropriately for visualizing the Doppler contours and measuring time intervals if that is the goal of the Doppler. Next slide, please. And that is it for the metric. As I mentioned, the total points would be tallied up by scoring each of those individual categories. The sum of all the scores makes the numerator, while the denominator is the number of echocardiograms scored for a particular quarter. I wanted to thank my team members for helping me put together this metric. There was a lot of robust discussion behind it, and I certainly hope you all will find it very helpful. And that is the end of my presentation. Thank you, Anita. Our next speaker is Dr.
Luciana Young from Seattle Children's, and she will be discussing the workflows. As we discussed, each of these pilot metrics is put into a pilot phase, and the experience of that will be discussed now by Dr. Luciana Young. Thank you, Shubhi, and thank you, Anita, for that very nice presentation of the metric. I'm sharing my screen; I'm wondering if you can let me know whether or not you can see it. Not yet, it's working on it. Okay. Yeah, thank you. Can you see it? Yes. Oh, great, fantastic. I did not present this as a slide presentation because, as you see, it's quite a bit of data, and it's a little bit difficult to get it into something visible on PowerPoint slides. But once this metric was developed, we piloted it at four institutions, and I'd like to thank Texas Children's, Nemours, and Phoenix Children's, as well as Utah Primary Children's, for having taken the time to pilot it for us. There were specifically five different categories that we looked at. One was, what were the biggest challenges that people had in completing the metric? And I have to say that, overall, I was pleasantly surprised: people didn't find it as difficult to fill out and complete as I would have expected, given all the detail that was included. The process was found to be fairly straightforward. I think that one of the areas of concern that was identified was the need for more education on the ALARA settings, and what this brought up was identifying an area for education. So I know that Anita's going to spend a little bit of time after this presentation going into it, but we'll also be posting some educational materials on the QNET website regarding ALARA settings, as a good review for the different labs that are performing fetal echocardiography.
So that was one of the areas identified here: not only more education on ALARA, but also the fact that different settings are used on different vendor machines, and these are typically going to be vendor-specific presets. So that seemed to be an area of focus for the individual centers in terms of the different vendor platforms they were using for their echocardiograms. There were some other questions related to interpreting the specifications required on the form, some of the definitions, for example, looking at a left ventricular inflow or outflow, and how to determine the rhythm assessments. These were more just clarifications in detail that were made to the living document, so those were very easily manageable and not a major concern. I think one of the things that came up here and was identified as important as well was the data collection process. This was more of an exchange of ideas that took place across the different institutions as to how best to use their database, whether it was one reading database or image database or another, in terms of identifying studies that were initial echocardiograms and were normal echocardiograms, and how to mine the database for this information. So this wasn't actually identified as a problem as such, but more of an exchange between the different institutions on how they could use their databases a little bit more effectively in terms of identifying studies. The next section was more about the number of patients that were used, which we corrected on the database, and also whether, rather than using a PDF format, we could use dropdown menus. And Anita, maybe you can update us; I'm not sure whether that ended up being done as a database or if we're continuing to do this as a PDF. I think right now it's continuing to be a PDF unless, Jen, there is a plan to create a database.
Because I know one of the things that came up was that perhaps doing a REDCap database with dropdown menus may be a potentially more time-efficient way of completing this. But honestly, it didn't seem that the time to complete the forms was a concern. Jen, do you have any updates in terms of a database or dropdown menus? Yeah, absolutely. So one option that I think we discussed briefly was the survey site that we currently use for quarterly data submissions. Of course, your final data that comes to the program and goes through the aggregation process goes through that survey link, but we have opportunities to do that for other forms as well. So it's certainly something that, as we're getting ready to roll this out for Q4, will be part of the discussion. Perfect, well, thank you. So really, the feedback from the different sites was very good; I believe it was very positive. The three areas that were identified were, first, more education being needed at sites regarding the ALARA specifications, and that's going to be rolled out, and Anita will talk a little bit more about that. In terms of the exchange of ideas, it was very helpful for the different sites to talk on our call about how best to identify these patients. And not only is it helpful for identifying studies that are normal, but also, in doing research and other QI, for identifying different pathologies within our databases. And then lastly, modification of the reporting: if there's a possibility of doing it as some type of REDCap dropdown-menu database, it may be a little bit more helpful in terms of time and resource management in completing these. So that's all I had. Thank you for allowing me to present this. I thought overall it was a really positive metric, and it didn't seem that there were any significant issues that could not be addressed quite easily. Thank you, Dr. Luciana Young.
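For sites that want to sanity-check their numbers while collecting this metric, the worksheet scoring and quarterly tally that Dr. Pratibha described can be sketched in a few lines. This is a hypothetical illustration, not an official QNET tool; the function names, item groupings, and example scores are all assumptions.

```python
# Hypothetical sketch of the fetal image quality metric (number 36).
# Category point totals follow the talk: 2D imaging 6, rhythm 1,
# color Doppler 4, spectral Doppler 3, for a maximum of 14 points.
# Item descriptions in comments are paraphrased, not official wording.

CATEGORIES = {
    "2d_imaging": 6,        # output settings, brightness/contrast, penetration,
                            # zoom/region of interest, cine loops, sweeps
    "rhythm": 1,            # M-mode or Doppler rhythm assessment
    "color_doppler": 4,     # frame rate, Nyquist limits, color settings, persistence
    "spectral_doppler": 3,  # alignment/sample volume, scale/baseline, sweep speed
}

def score_worksheet(answers):
    """answers maps each category to yes/no booleans, one per item.
    An item is 'yes' if, at some point during the study, an image meeting
    the stated criteria was obtained. Returns a score of 0-14."""
    total = 0
    for category, n_items in CATEGORIES.items():
        items = answers[category]
        if len(items) != n_items:
            raise ValueError(f"{category}: expected {n_items} items, got {len(items)}")
        total += sum(items)  # True counts as 1, False as 0
    return total

def quarterly_average(worksheet_scores):
    """Metric value for one quarter: numerator is the sum of all worksheet
    scores; denominator is the number of complete fetal echoes (>18 weeks)
    assessed. A minimum of 10 studies per quarter is suggested; centers
    performing fewer than 10 review all of them."""
    if not worksheet_scores:
        raise ValueError("no studies assessed this quarter")
    return sum(worksheet_scores) / len(worksheet_scores)

# One worksheet missing a single color Doppler item scores 13 of 14.
example = {
    "2d_imaging": [True] * 6,
    "rhythm": [True],
    "color_doppler": [True, True, True, False],
    "spectral_doppler": [True] * 3,
}
print(score_worksheet(example))                                     # 13
print(quarterly_average([13, 14, 12, 14, 11, 13, 14, 12, 13, 14]))  # 13.0
```

A REDCap or survey-form version, as discussed above, would capture the same yes/no items per category and compute the same totals.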
And if people have questions, please feel free to post them in the chat box, and we will move over to Dr. Anita Moon-Grady to give us a talk on education and lessons learned. Thanks, Shubhi. I'll try to share my screen. Yeah, I'm stopping sharing. Sorry about that, Anita. Everybody's oversharing. Okay, I've never shared on Webex before, so wish me luck. You can see my screen? Is that coming across okay? Looks good. So I was tasked with lessons learned. You just saw this; I tried to make it as big as I could. This is the same feedback that we just saw, but what I did was circle in red every time people mentioned this part of the metric, the mechanical index and thermal index. So clearly this was a problem, and it was a problem across all sites. And so we've created an educational module. I'm going to switch slide sets now; I hope this will still show. Are you seeing the blue screen? Yes. It says patient safety. Okay. My apologies to people who don't do fetal ultrasound; this is going to sound totally out in left field, so I'll try to explain what this is about. There is a lot of data in the obstetric world about the safe and prudent use of ultrasound. Ultrasound is non-ionizing, but there are bioeffects of ultrasound, and they haven't yet been studied, and are unlikely to be studied, in human fetuses. The concept of keeping the ultrasound dose as low as reasonably achievable has been adopted from the radiology literature into the ultrasound literature. In practice, prudent use is what we want, and so it's a combination of justifying the exam, so don't do the exam at all if it's not necessary, and optimizing things when you can. Here's the problem: multiple societies have issued safety statements, without any evidence, I must say. But the statements recommend no commercial demos on human subjects, no training on students, and none of this looking at the baby just for fun.
These are official recommendations from ISUOG, AIUM, the British Medical Ultrasound Society, and a bunch of other ones. These are the things you can do: if it's necessary for medical diagnostics, people are trained, you keep the exam time short, and output levels are as low as reasonably achievable, then theoretically you're safe. For the non-fetal cardiologists, a reminder about what early human development is like: certainly we are now scanning in this 10-14 week gestational age window, when some of the organs are still forming, but mostly organogenesis is complete. The possible bioeffects include everything on this slide. Basically, your mechanical risks are gas bubble cavitation, and your thermal effects are rises in tissue temperature. The mechanical effects come mainly from 2D ultrasound; the thermal effects are from Doppler. The problem is, as was noted by all of the people who tried to complete the metric, that the output power from the transducer varies from one machine to another and is a setting that one can change. It increases as you move from real-time 2D imaging to color flow Doppler and spectral Doppler. With M-mode, in general, the intensity is low, but the dose to tissues is high because the beam is stationary. To avoid these exposures, from both a mechanical and a thermal standpoint, there are these two indices that were introduced; they have to be displayed on the device screens, and they should be evaluated routinely. There is the thermal index, of which there are three different versions depending on what type of path the beam is traveling through. We mostly use TIB, because by 13 or 14 weeks the fetal skull is ossified or starting to have ossification centers, so this is considered the more conservative of the two soft-tissue choices, while the cranial one is way too conservative. So TIB is usually what we're talking about. There's only one MI. Let's skip all of the physics; I don't really like that, and we're running behind anyway. The mechanical index is rarely an issue.
0.7 is the value that's been chosen; below that, you're unlikely to get cavitation unless you're using contrast agents, and obviously we're not using contrast agents in children. Otherwise, the limit is set around one. The thermal index is the ratio of emitted power to the power required to raise the temperature of the tissue by one degree Celsius, and just a couple of degrees can cause fetal damage. This is dependent on the tissue that you're insonating and the time that you're exposed, which is why we have those indices for soft tissue, bone, and cranium. All right. So, the theoretical damage, when are you going to get there? This is just a table that shows that it's not linear: if your temperature elevation is theoretically only going to be one degree, you would have to scan for 256 minutes to actually cause any damage. So, again, theoretical, but there are guidelines, very clear guidelines adopted across multiple organizations, to limit obstetric scanning and to monitor these indices while you're scanning. There are guidelines for both TI and MI. Pretty much, if you're below 0.7 or one, you can scan for at least an hour. Let me see, why did I put this here? So, people piloting the metric did ask several times, well, I don't know how to change it, or I don't know where it's displayed. The answer is, it really is something that the vendors are required to do, to display the TI and the MI, but the vendors are not required to limit them. So, since 1992, they're allowed to make machines that are theoretically dangerous. They are not required to make them safe. They are required to give you the information you need when you're scanning, be that as it may. So, it is on the physicians now to make sure they're safe. Okay. So, what do they have to do? They have to adjust the power, the exposure time, and the probe position, and limit the use of Doppler.
In order to minimize TI and MI, and to make sure they're monitoring the correct index, they just have to be careful, since these may underestimate the actual exposure. They need to check the acoustic output power in the manual, check with the applications people, and check on the machine. Use high gain instead of high power when possible, and start the scans low, only increasing as much as you need to get good image resolution. Avoid repeat scans. Consider not scanning febrile gravidas at all. And don't hold the transducer stationary. Contrast, obviously, we don't use. All right. So, these are just some examples that I put in for people to look at, showing where it's displayed and the effect of changing the focus, which can bring down the mechanical index, and of changing the power, which can dramatically bring down the TI, the thermal index. These two images, I think, are almost exactly the same, except we've brought the power down from 100% to 50%, and you can see now we're in a completely safe range; you could scan for days. So, increase the gain instead is the lesson there. I have some more examples in this slide set for people to look at what they could do to get their numbers down. So, to summarize, the application of ALARA is what we're talking about, and scans need to be medically necessary. The MI needs to be kept as low as possible without compromising the image quality on your 2D, ideally less than 0.7; it's okay to go up to 1 as long as you're not scanning for over an hour. The TI should be less than 1. Over 3 should not be used at all, and if it's between 1 and 3, the scan time should be kept as short as possible, 5 to 10 minutes of having it on. And I just want to, again, reiterate that these are not the responsibility of the vendors. You have to look at these things. The vendors can deliver a machine that has a TI of 5, and it's up to the labs to make sure that their machines are safe. These recommendations, though, are for theoretical situations based on modeling and animal work.
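The TI and MI thresholds summarized above can be captured in a small decision sketch. This is a hypothetical illustration of the guidance as stated in the talk, not a clinical tool; the function name and the exact return strings are assumptions.

```python
# Hypothetical encoding of the TI/MI guidance from the talk: MI ideally
# below 0.7 (up to 1 is acceptable if scanning under an hour); TI below 1
# is unrestricted, TI between 1 and 3 means keep the scan short (5-10
# minutes), and TI over 3 means do not scan. Not an official tool.

def scan_guidance(ti, mi):
    """Return the talk's recommendation for a given thermal index (ti)
    and mechanical index (mi), checked from most to least restrictive."""
    if ti > 3:
        return "do not scan"
    if ti >= 1:
        return "limit scan time to 5-10 minutes"
    if mi > 1:
        return "reduce output power (raise gain instead)"
    return "safe for routine scanning (up to an hour or more)"

print(scan_guidance(ti=0.3, mi=0.6))  # routine scanning is fine
print(scan_guidance(ti=1.5, mi=0.7))  # keep the scan short
print(scan_guidance(ti=3.5, mi=0.5))  # do not scan
```

In practice, as the talk notes, TI and MI are not set directly; they follow from depth, PRF, and output power, so the practical lever is to lower power and raise gain.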
There's no data in humans, no epidemiologic support for a causal relationship between medical diagnostic ultrasound and adverse effects on the fetus. So, we are admitting that. But nevertheless, these are the recommendations, and that's why they're in the metric. And there are some references. This slide set is something I sent in, and if the ACC wants to rebrand it, that's fine with me; they're all my slides. Thank you so much, Anita. That was really helpful. And now we are open to questions about the metric and the process. If anybody wants to share their experience, please feel free to ask questions. Hi, this is Maj Makul from UK. So, do you see this metric as complementary to the comprehensive initial fetal echo metric? So, basically, you can run both at the same time if you're collecting them? Yes, that was how it was designed. So, hopefully, when you select your 10 fetuses for your comprehensive metric, you can also simultaneously score for this metric. Thank you. And also, like most of the people doing the transthoracic metrics, you know, we had chosen the number 20 for that, and we did 10 for the fetal because there are a lot of centers that do not do as many normal or uncomplicated studies. So, just to make sure that you maintain a process and a level of education, we decided on 10 for the fetal metric. Thank you. Does anybody else want to share any concerns or questions about the process, identification, or anything for Dr. Moon-Grady about her comments on how to adjust MI and TI? This is great. May I ask a question, Shubhi? Is that okay? Absolutely. Has there been any discussion at the industry level regarding making this a standard preset, or any agreement between the vendors? Not to my knowledge, Anita. Yeah. So, no, they actually went the other direction. When they were limited as far as power output from the machine, they felt like they were limited in the image quality improvements that they could make.
And so, that limitation was raised, as I showed on that one slide, in 1992 to allow them more flexibility. The issue is, and I rushed through it at the end, that you don't set TI and you don't set MI. They're what you get depending on your depth and your PRF and your power output. And so the suggestion is, don't turn the power all the way up; turn the gain up instead, because gain is post-processing, right? So, that's the issue. But the answer is no, and there are reasons not to limit the vendors, because it would compromise image quality. This reminds me of the thermistor on the TEE probe and how, whenever that happens, we turn it off or freeze the image or what have you, and the thermistor reading goes down. It seems like it should be easy to identify the TI and MI, and, like the thermistor, have something go off when you reach these threshold levels. Yeah, maybe it should shock the person scanning or something. Just something, you know, because it's very easy when you're in the OR and the thermistor alarm starts: we acknowledge it and then we adjust. I think, I know this is being recorded, but I'm going to say it anyway. You know, it's actually probably ridiculous, because those numbers are for stationary tissue and a stationary beam, and everybody knows that when we're using pulsed Doppler, we're Dopplering the blood, which isn't stationary, and with the 2D, we're insonating the fetus, which also isn't stationary. So I feel like there's zero chance of damaging a fetus during a fetal echocardiogram. But, you know, these are the recommendations from multiple societies, and I don't think that we're in a position to say that we don't support them. So I'm not that worried about the fetuses, and it is easy enough to manage; you don't ever need to turn the power up to 100%.
It's pretty easy to set presets for this, and I don't worry about it in my lab at all; I've known about it for years. Anita, there are questions in the chat box. Does maternal body habitus affect the TI? No, it's what the operator does to improve the image quality that would. It's probably going to be higher with larger maternal body habitus because you're going to crank the power up. But no, this is theoretically what the temperature increase would be, so you're going to get the same thing whether you put it on a phantom or on a patient or on a turnip. You know, it's just assuming the average heating for human tissues with a certain water content. Looch, Anita, and Anita, in your institutions, have you set up any process that has worked for you to cull the data and identify the patients easily, to minimize time? Because we just talked about resources, and how every lab and every place is short on resources to incorporate this, even though it's extremely helpful to do this exercise and to impact change. Sure, I can speak in terms of our fetal clinic or prenatal clinic at Springbrook. We have a pretty robust quality improvement program that meets on a regular basis, and we have several administrative staff as well as nurses who are focused on and have been tasked with going through this process and collecting the data. So I feel that, in terms of our institution, we're very fortunate to have the resources to go through it. And, you know, again, I don't think that, based on the amount of data that we need to collect, it's that time-intensive or that much of a challenge. I think we've made it pretty easy in terms of the different metrics that are being collected. So for Texas Children's, fortunately, with our volume, we do have a fair number of normal echoes. And so for the pilot project, my colleague, Dr.
Batool Ilmas, she led the pilot phase, and she essentially could run through the list for a couple of days and was able to pick up the echoes. But we haven't, as a group, decided how to do it systematically, I mean, whether we would use something like a random number generator or something like that to pick up the 10 studies per quarter as we go forward. We haven't actually started yet because the lawyers refused to sign the release forms for QNET, but I think we've got that under control, and I think we'll be ready to start in the first quarter of 2022. I will say that we prospectively identify normal studies in a database, so those will be very easy for us to pull. And we were already doing almost exactly the same metrics that are on here, probably because, you know, I've been involved in developing the metrics. So our sonographers were already doing QA on each other's studies on a very regular basis; we're doing two per person per quarter. So we're just going to, you know, change which form we write it on. And I've assigned a quality officer who's one of our junior people, and he's developing his own REDCap database with the dropdowns and stuff that people have talked about. And I have to go because I'm in, like, this horribly busy outreach clinic, and people keep coming in going, when are you going to see the patient? So thank you, everybody, and it's been great. Thank you very much, Anita. I think this was an excellent session. Please keep the questions coming, and feel free to reach out to Jen. I think there is a lot of work that can come out of this: we can learn from each other, and we can figure out how to create collaborative studies. Encourage your institutions and your friends to join QNET so we have more robust engagement. And we really look forward to participation from our friends from Pakistan. It's amazing. It must be so late at night there now, Shazia, and you have, like, continued to support and participate actively in this initiative.
So, Ritu, do you have anything, or anybody else who wants to chime in? Kathy? Jen? No, thank you all so much. And please don't hesitate to reach out if you have any questions. We'll make sure that we get the recording, as well as the presentations from today, posted, and I'll send an email out to everyone to let you know when they're up. So I look forward to working with all of you. Thank you. Thank you. Thank you all for joining. Bye-bye. Happy Thanksgiving. Happy Thanksgiving. Bye, everybody. Bye. You too. Take care. Bye-bye.
Video Summary
The video transcript provides updates on the QNET program and introduces a new program manager, Jen. The session covers updates on education, participation awards for 2020, and plans for 2021 awardees. New metrics for quality improvement are introduced, including tetralogy of Fallot, BMI, non-invasive imaging, and COVID-19. The MOC team discusses approved QI activities for MOC Part 4 and the process for obtaining MOC credit. The importance of change management in implementing QI initiatives is emphasized, along with an eight-step approach for successful change. Barriers faced in implementing QI initiatives are discussed, emphasizing the need to create urgency, establish a guiding coalition, and communicate the vision effectively. Empowering others, planning short-term wins, and consolidating improvements are highlighted for lasting change. The session concludes with a presentation on image quality metrics for fetal echocardiography, including lessons learned from pilot testing. Key driver diagrams, metrics for transthoracic echocardiography, and participation rates are discussed. The presentations are given by Dr. Anita Moon-Grady and Dr. Luciana Young as part of the ACPC Quality Network.
Keywords
QNET program
program manager
education updates
participation awards
quality improvement metrics
MOC team
change management
barriers to QI initiatives
image quality metrics
fetal echocardiography
pilot testing
participation rates