Opening Plenary/The Ralph G. Brindis Keynote
Video Transcription
Please welcome to the stage Senior Director of Registry Services, Barb Christensen. Well, welcome to the 2024 Quality Summit. We're so excited to see so many of you here in San Antonio. We are over 800 strong, so that is amazing. So first of all, I want to thank a few folks. I want to thank our faculty, our session planning groups, our exhibitors, and ACC leadership for their time and support to make Quality Summit one of the premier cardiovascular conferences in the world. We also have the support of the ACC president, who you'll hear from this morning, and of the board of trustees. And so a thank you to Dr. Hani Najm, who is here representing the board. It really shows how much the ACC values you and the work that you do. So thanks to all of you for being here and being a part of this conference. I know I speak for my colleagues when I say that we love seeing you, we love hearing from you. You energize us, and we hope that you're going to feel that same energy here with all of your colleagues. There is one special person to thank, and that's Dr. Olivia Gilbert. She is our Quality Summit program chair. And Olivia is one of those amazing superpower women: she has a young family, she has a busy clinical schedule, she is a professor of cardiovascular medicine at Wake Forest, at Atrium Health, and she is committed to ensuring that our patients receive the highest quality care and the best outcomes that they can. She is a true champion for quality, and it is my pleasure to introduce you to her. So I'm going to ask her and the rest of the team to come up. Welcome, welcome everyone. So excited to have you with us. Who's excited to be in Texas? Who's excited for a couple days of action-packed, inspiring, empowering sessions and networking? All right, here's the real question. Who's excited for the prospect of the lazy river? So in all seriousness, we are very excited to have you all with us. We have a fantastic lineup of lectures, of networking opportunities, of poster presentations. And as Barb alluded to, this is a time where we can learn and grow from each other's experiences in quality improvement at our own institutions, and share and learn and encourage each other to take those concepts away as we try to promote and improve our cardiovascular outcomes. And while the focus of this conference is on registries and accreditation, we do hope to encourage all frameworks of thought for quality improvement, collaboration, and multidisciplinary care. So for this lecture today and for the lectures during this conference: this is a team sport. We do want to hear your questions in our Q&As. You can go into the sessions through your app, enter your questions, and we will address them in the sessions, including at the end of Ami's talk. We would love to hear your questions and comments during that Q&A session at the end of her talk. So to that end, I'm thrilled for our Ralph Brindis keynote speaker, Dr. Ami Bhatt, the ACC's Chief Innovation Officer. She'll be speaking to us on the impact of evolving technologies on quality analytics and care delivery. And no one is more qualified to do so. She is incredibly inspiring, so I'm looking forward to her talk. So without further delay, I will pass the baton to our president, Cathie Biga. Thanks, Olivia. And welcome to Texas. You know, they say everything's bigger in Texas, and you need to know that you are our second largest meeting within the ACC. So congratulations, and thank you for being here.
Many of you may not know much about me, but you are my peeps. This is where I started my role in the college, in quality and advocacy. So coming back here and being able to address you is an incredible honor for me, because I sat in those seats. And I want to reiterate what Olivia said to you. This is a time for you to engage. It's a time for you to learn from each other. And it's a time for you to question things. The ACC staff used to think I was a pain in their neck, because I would constantly question some of the metrics: how do we do this in the real world, when we're so busy in our cath labs and our structural heart programs? How do we do that? So when Dr. Steve Bradley assumed his role as chair, he was thrilled to see me right near his elbow, always asking questions. So don't hesitate to ask questions. I'm here on behalf of the college and our entire board of trustees, and I'm thrilled that Hani's here with me today. So please feel free to stop us outside and ask us any questions that you might have about the college. Of course, one of my roles and one of my goals as the president, and the first non-physician president of the college, I might add, is that I'm hoping all of you will join us as the CV team members of the college, because it really is our professional home. And embracing all of you would be so important to me as we continue to improve heart health for all. So make sure you take all the information that you learn today home to your teams, because you've heard it: cardiology is a team sport. We've talked about transforming CV care for years. It is time that we truly transform cardiac care. And the quality infrastructure is imperative in order for us to do that. So please, as you go home, take everything that you've learned. I used to love that when I left these conferences, my to-do list would be really, really long. But actually implement some of those things to make it all better. Over the next few days, I encourage you to take advantage of this time together. Learn how to make the most out of our registries and our accreditation services. Network with your colleagues. There is nothing like networking. I was honored to ride from the airport with one of our nurses, and she was so excited to meet the people that have been helping her in her new role. This is what this forum is all about: to help each other figure out how to do what it is that we need to do. And then celebrate your achievements. There are some excellent posters, so make sure you spend some time looking at the posters and taking that knowledge home with you. In case you didn't know, this is a very exciting year for the college. It's our 75th anniversary. It's really a 75-year legacy of transformation, diversity, and inclusion. And I think the college is walking that walk and leading the pathway globally as we try to transform health care. Our new strategic plan just came into place, and quality is a cornerstone. It is a key pillar. NCDR, started by Dr. Ralph Brindis, and our accreditation services are so very, very important to us. So I encourage you to really engage intently. Generating and delivering actionable knowledge, along with advancing quality, equity, and the value of CV care, remains central to our strategic plan. Guideline optimization and solving for the workforce crisis, growing that next generation of clinicians, are critically important to us. Guideline-directed medical therapy is our infrastructure.
Our guidelines and our quality metrics are proof that we're really delivering the care that we want to deliver. Identifying and implementing tangible, real solutions. I also talked to a couple of you, and you feel incredibly overworked, because there is so much to do and so little time to do it. And our hospitals are in an incredibly challenging time post-COVID. So learning how to do things more efficiently is essential, and Ami is going to talk to us a lot about artificial intelligence and how that can actually help us do our job. We need to learn to live with AI and truly help it reduce our administrative burden. As you can imagine, NCDR and ACC are critical to all of our efforts. We need to build on our clinical data, our operational data, and our accreditation infrastructure in order to provide that high-quality CV care that we're known for. We need to define the best practices, and this is where you all come in, to define those best practices for both reimbursement and the governance of our patients' care. Developing a digitally enhanced care model is also important. We have got to get rid of paper and pen. We've got to get our EMRs working for us, not us working for our EMRs. And we need to advocate to shape the structure of future payment models. As you know, my second love in the college is advocacy. And it's something that's poorly understood, so I hope I'm going to see some of you in Washington at the end of the month at our Legislative Conference, because it is where we need to go in order to solve the problems of our healthcare economics. Those people on the Hill and our people at CMS need to hear the voices of our CV care team. Our physicians, our nurses, our dieticians, our pharmacists, the whole team need to advocate for what we need to do to take care of our patients. The goals are lofty, but I believe they're doable. And if done right, we will have a profound impact on our work and, more importantly, on the lives of our patients and their families. But this can't be accomplished alone. It requires teamwork, collaboration, professionalism, and a shared commitment to improving heart health for all. Improving heart health does not happen overnight. It is a long and tedious process. Our data is not going the right way. We need to decrease that global burden of heart disease. We need to work with our patients where they are. We need to deliver care that they understand and commit to. We need to include their families as we take care of our patients on their journey. This journey has been going on for 75 years. And the differences we continue to make are tangible. I like a quote from the comedian Amy Poehler, who says, find a group of people who challenge and inspire you, spend a lot of time with them, and it will change your life. I think these next few days, you are with those people. You're with our peeps. Make sure that you use that advantage to transform your life. Over the next few days, I encourage you to find your people, spend time with them, learn from them, teach them, and then go home and change other lives. Challenge people to reach their potential. Allow people to grow and to succeed. Be welcoming. Be optimistic. Be my informal leaders. Each and every one of you is a leader in your organization. Take that superpower cape that you have and use it to the best of your advantage. Thank you again for being here and for your dedication to quality care. And have a great summit. Thank you. Oh, hang on. I forgot to introduce Ami. So Ami, dear friend and colleague, you're going to love her.
She makes this sound so simple. It's really not. But she'll tell you how to make it simple, Ami. Thank you. So the last time I was here, and I'm sorry to say it's been a long time, I was a fellow, which is a very long time ago, actually, and the IMPACT Registry was just starting when I was a fellow, for those of you who have been here long enough to remember. And it was a smaller room. We didn't have a fancy video. But I think more importantly, what struck me as I was sitting here is we didn't... we had a female hospital CEO, one of the first, she has many firsts over here, but we hadn't had a non-physician president of the ACC. And we most certainly did not have female cardiologists with two daughters, ages one and four, as full professors who run this summit. So can we give them both a round of applause? So in the next 20 minutes, my goal is to share with you a little bit about what evolving technologies look like. I'll give you some actual concrete examples. And then we'll talk about how important it is that this room becomes digitally AI-enabled and why that matters. So, disclosures. Plato said this. I didn't. Necessity is the mother of invention. That has driven a lot of the human race's experience. This is what I learned: desperation is the mother of adoption. If you don't know what to do, you are willing to try something new. Let me explain why. It starts actually a long time ago, back when I was a fellow. It is an incredible honor to give a keynote that is named after Dr. Ralph Brindis. He said this back in, I want to say it was around 2011: as a professional society, we have a privilege of self-regulation. And if we don't do self-regulation in a responsible manner, other people are going to do it to us. And they will use methodologies that a clinician or expert would not agree with. And when I looked at this quote, I thought, this is so true today as we look at the explosion of evolving technologies, digital health, and AI, which is happening whether or not we get on board. And therefore, we have learned, and there is nobody better suited than this group here to say: we know measurement. We understand implementation. And so we will take this next evolving technology and we will help incorporate it to actually make medical care better. And so I think, you know, it says so much that he was right then and he's right now. It's possible his family wouldn't agree with me when I say that, but I will say he was right then and he's right now. So, my objectives today. Convince you that to provide high-quality care today, we must use digital health to improve access and what we call collaborative intelligence, rather than artificial intelligence, to manage the volume of data. After I convince you of that, I'd like to implore you to demand quality standards as this tech evolves and offer you some real-world applications that hopefully you'll remember, so that you'll know where you need to apply some of those standards. And then lastly, introduce you to generative AI and some concerns I have as these generative AI models exponentially enter medicine; they really need to be held to quality standards, and we haven't figured out how yet. So, the current health care challenge. Let's start with convincing you. Accelerating complexity. Hypertrophic cardiomyopathy, and I'm a congenital doctor, you'll hear that when I give you my examples. Hypertrophic cardiomyopathy, right? There was maybe one chapter in Braunwald's book and a paper or two about it in the early 1980s.
Now there is more about it than one can possibly understand, and that's just one disease. Now think of all the cardiovascular diseases. There is significant complexity because we've learned more. There's also exponential information overload. So we have wearables. We have the EHR. We have patient-reported outcome measures. We then have the textbook knowledge, and we have the most recent trials that came out. So there is a lot of information we need. We also have social and demographic information that we know we need to incorporate. We have rapid tech disruption. You see things changing in how we treat patients and in what kinds of technologies are available to improve their care. And despite all of that, we still have marked disparities in access and quality. Let me give you a concrete example of how much more information there is now. In 1980, medical knowledge doubled every seven years. By 2020, the doubling period was fewer than 75 days. I spent a lot of time trying to ask the people who calculate this to calculate it for me now, and they all say it is not possible. The doubling time cannot be calculated. That is how fast new medical knowledge is arriving in the world today. I'll give you another example. Pre-COVID, the first three years of med school education reflected only 6% of known medical information at the time of graduation. That was pre-COVID. Just think how much more information we have access to now. Only 6%. So I also question whether we're teaching the right things. Do you need the Krebs cycle? I had a thing with the Krebs cycle. We'll come back to that. So, a reframing, if you will. We can do better. Medical knowledge and data have exceeded the human brain's ability to source, retrieve, parse, and apply them in a time-limited situation, which is when you are with a patient. So to provide cardiovascular care with scientific rigor and avail ourselves of all these advances over the past century, we're not just dependent. We are now desperate for compelling and responsive computing power to take all of that data and get it to clinicians at the point of care, when they need it, to make the best decision for you, for your mom, for your daughter. So what is the goal of digital health and AI? It's to optimize care. I love my triangles, so I'm going to start with this one. Chronic management is the majority of what we do in cardiovascular disease. People over age 60 have, in general, two to three chronic diseases, of which two are actually cardiovascular. So chronic management is step one. If we can start to think about digital health and AI, and get the data out to the patients who need it, get it in from them, process it, and get it back out to them, we can allow patients to partner in their care while remaining local. Why is this relevant? Because registry data is not just for the research papers I wrote from the IMPACT Registry so that I could continue to publish, et cetera. It is to actually improve the care of my patients in the communities where they live. And we're uniquely positioned in this group to be able to think about that, in the communities where people live. Now, what happens when we catch people using digital health or AI? When we catch people, we can identify a potential progression of illness. We can address it promptly. We can address it in the communities where they live, through our expanded workforce that is not based only on physicians and nurses and APPs, but that expands further to include our pharmacists, our community health workers, our LPNs.
And then we can address it in the community. Or we can say, hey, you need to come in. You do need a tertiary center for this. But I'm going to get you the right next test, the right clinical team, and the right location of care. Think of the number of times that someone comes to a clinical team and they say, you ended up with EP, but you belong with heart failure. And now they go back onto the list to wait another 60 days to see someone. So digital health and AI: our goal is to optimize care, to catch people in the community, to correct disease in the community, to bring them in when needed, and to do it in a way that is efficient for the patients. But the metrics we measure matter. So this is the Peterson Health Technology Institute. They are a nonprofit foundation. And this is one of the first papers they put out. I'll tell you the colors in a second. What I want to tell you is they were interested in diabetes, because I was always jealous of diabetes. I'm jealous of two groups: the diabetes organizations and the oncology organizations. I want to be more like them. For diabetes, I wanted to be more like them because they measure things. Continuous glucose monitors, closed loops where patients can manage themselves. I've always wanted to be able to create those in cardiology. This group studied those companies and organizations in digital health that said, we do this and we do it well. What they measure matters. They were measuring patient engagement. They were measuring how many patients enrolled. But when they measured how much the hemoglobin A1C, the measure of glucose control over the past three months, actually improved, we see the following. The dark green is remote patient monitoring, the thing that I'm jealous of that they do well. The blue is behavior and lifestyle modification, which we are all preaching when we talk about prevention. And interestingly, the purple is nutritional ketosis. Put that aside for a second. Look at the greens and blues, the things I was jealous of. The majority of them did not create a clinically meaningful difference in the decrease of hemoglobin A1C. So we were touting the success of remote patient monitoring systems, of digital health systems in diabetes, for over a decade, when actually we were not improving the numbers that proved that patient care would get better. We couldn't measure morbidity and mortality. It was too complicated. Nobody asked them to do it. But this was a surrogate metric that probably should have been paid more attention to. And so the metrics we measure matter. And the kind of hard detail that we offer as the ACC is different than what these companies get when they measure "most likely to recommend this technology to a friend." They are measuring consumer metrics. And we have the hard clinical metrics that they need. And that is an important reason for us to partner in digital health, especially as we move towards hypertension, heart failure, and the other remote monitoring programs that are growing rapidly. I want to transition to artificial intelligence for a little bit. I could do a couple of days of lectures on this. But let's talk about the basics. Where can we use it? So we can use it in administrative care. We'll talk about some examples. We can talk about clinical support. And then we can talk about population health. And just to put everybody on the same page about this, artificial intelligence is the big word. We like to call it collaborative intelligence. It's not going to do anything right on its own.
If we're not in control of the data we put in, and if we're not iterating on the data that comes out, AI is not going to work. It will, in fact, be artificial. We call it collaborative intelligence. Machine learning, neural networks, ChatGPT and generative models, Copilot: those all fall within that bucket. So where can we lead in AI? One is clinical care delivery optimization. It goes back to the question of how much data is out there. We don't have time to get all of that data while we're sitting with a patient. We don't have all that time to get all that data when we're trying to fill out registries. We need to use computing power to be able to get that information, but just give it to us. It's like a cheat sheet. Here you go. Here's everything you need to know about this group of patients, about this individual patient. Now use your clinical acumen to do the work that you were meant to do. The second is patient disease data screening. I'll give you some examples in a minute of how we can use EKGs, echos, and even electronic health records to find patients who have disease earlier than we would have if we waited for symptoms. That goes into diagnostic EHR data evaluation and suggestion. Can we not only do that, but can we suggest to clinicians, hey, I'm noticing these things in this patient, and my AI algorithm tells me there's a 96% chance they're going to develop AFib soon. Do you want to maybe think about that? You don't have to, but you could think about it. Administrative effort reduction. Think of the number of prior auth companies. Entire companies built for prior authorization. They heard our pain point loud and clear. Some of them are successful. Some of them don't get it yet. Some of them are frustrated like we are when we do prior authorization. But there are a lot of companies working on decreasing administrative effort. And most importantly, all of these end up decreasing high resource utilization, meaning getting the right patients to the right care at the right time, because we identified them early enough using algorithms and AI. So here are some actual examples. And this is in your deck, so you can look at it later. You don't have to be able to read the little words. I'll give you an example from each. So for EKGs, one of our hardest things is: if you have left ventricular hypertrophy on an EKG, what could it possibly be? And so I'll give you a specific example. We have our TAVR registries. But at any institution, the number of patients who have LVH on an echo, who are evaluated to see whether or not they go down the TAVR route but actually just have LVH, the aortic valve is normal, and they just get put aside, is significant. Those patients can have amyloid. They can have hypertrophic cardiomyopathy. There are a number of diseases that those people may carry. And now, using AI, we can actually evaluate that EKG with LVH on it, or an echo with LVH on it, and say, hey, we know that this isn't going to end up in the TAVR registry, but look how your screening for this process helped us identify the disease you do have and got you to the right care. So AI for EKG, echo, and MRI: earlier diagnosis. Other things: being able to find disease where we weren't looking for it. CT scans are a popular example you'll hear about, where people get CAT scans for a lot of different reasons. And we used to say, if you didn't get a CAT scan with contrast that was gated, you can't tell whether or not you have heart disease. It's true, in a different way.
You can't tell exactly how much plaque there is and where it is, but you can tell if there's calcium. And if there's calcium in your coronaries and you're a 32-year-old woman, you want to know. And so these are some of the things we can do for population health. Let me give you two examples of things that I really liked. So this is a study that was published in Circulation last year. And it's an AI-enabled intervention to help identify high-risk patients. It's what we talked about. If you have coronary artery calcium on a regular CAT scan, your primary care gets notified: let me just let them know. Two weeks later, the patient gets notified. We are giving patients agency now. What happened in this study? It turned out that with usual care, only about 7% of people who needed a statin got one prescribed, and by the way, that's really low for GDMT, while notification resulted in about 50% of those patients ending up with a new statin prescription. Note to self, there was a halo effect. They also had their hemoglobin A1C checked. Many of them had blood pressure medications that were adjusted, because they were in care and they were given agency: hey, something does seem wrong, let's move on, let's figure out what's next. Interesting. We can also use this in a very different way, which is, if we think about what we do, PET scans, catheterization in the cath lab, there's radiation exposure to our patients. So this is a different way to think about it. It's at the machine level. This is a PET scan. And the standard PET scan you see in the non-circled area is a 24-minute scan for the patient. But after you use an accelerated scan that uses AI on the pixels to say, hey, if the pixels are here, I'm pretty sure there's a 98% chance there's a pixel right here, too, and a pixel right here, in six minutes this person's scan is over, and you have the exact same picture. So there are lots of ways, at the technical level but also at the reporting level, that we can use AI to actually improve patient care and decrease exposure to risk. So the ACC strategic plan is fully aware of this. There are two big parts of the plan that I would love for everyone here to know. The first is making clinical guidance usable at the point of care. You'll hear more about this over the next year to two years. But we are trying to think of ways that we can get the data to you that I'm talking about. Because you can't each go out there and figure out, what system am I going to use to source, parse, retrieve? This is great, Dr. Bhatt, but how? And so we know that that's challenging, and we are now piloting some different models for how you can best get and navigate through our guidelines, through our papers, to understand what is relevant at the moment you need it, to understand a patient better. We're also working together with MedAxiom to create a best practices framework for care delivery and implementation. We have had workbooks and playbooks that have come out of Innovation and MedAxiom before. They were static: here's some information; if you'd like to go do telehealth, if you'd like to do remote monitoring, here's how you could set it up. What we're now doing is actually making it more interactive, which is: can you evaluate yourself? Are you ready to implement digital health for heart failure? Here are the things we think you need. Here they are in order. Which ones did you go ahead and try? Give us feedback on whether it worked for your size of hospital, system, or clinic.
And then we will, in real time, start to adjust how we're giving recommendations based on ACC member experience. And so we're really excited to do that work. The first two will be ambulatory surgical centers, ASCs, and generative AI, implementing things like ChatGPT in your daily practice. So let me get to the elephant in the room. By the way, I love her. I feel like her many days. How many people out there sometimes feel like her? Like, this is... OK. There are challenges with AI. There are compute challenges. Hallucinations can be dangerous in medicine. AI goes out there and it picks words that are related to one another. If it picks the wrong words, you get the wrong answer. Let me give you an example. This EKG demonstrates that there is blank: a heart attack; definitely a heart attack; no sign of a heart attack. It's just words. So if there weren't context around it, ChatGPT would just put in whatever came up most often in its search. Data challenges: biased input can be amplified with AI. And we know a lot of the structured data that we have right now, at the hospital level especially, is not very diverse. We have more work to do. But if we put biased data into a model, we can amplify that. Human challenges. If we use AI and then we don't look at the outputs, we can deliver bad care. Hey, I don't know what to do with this patient. Let me put it into ChatGPT. Oh, it says give them this drug. Well, no, actually, that's not right for this patient. We can't blindly use AI. So humans need to be taught how to use it. We can't just say, trust it. Because again, AI never works the first time. You need to know what goes in. You need to work with what comes out. Global challenges: AI is not as easily accessible throughout the world. So we can't make promises for my mom, who is headed to a small village near Allahabad in northern India. They're not getting AI. Not right now. How are we going to do that? How are we going to get them any form of help with data? At least take them from paper to electronic. And then lastly, my fifth grade daughter, she, sorry, sixth grade daughter now as of September, she is head of the Planet Protectors Club for sixth grade. And she told me, Mom, you can't tell people to use AI all the time for all of health care. It is going to destroy the planet for my generation. So I was like, I think I'm going to include that in my talk. So it's now in my talk here, thanks to Avni. So there are challenges, and we're aware of those challenges. Recently, I was asked to do a debate on AI. I was really excited. It was at ESC. And then I read the fine print closely. I was asked to do the con side. I was like, come on, that's mean. Like, I do this for a living. I learned a lot. And I want to share two more things that are a little specific, because the things we talked about on the prior page are global issues with AI that every industry faces, that we will all work on together, and that may need to be worked on at a governmental level, right? At a technology level. These are two that are specific to the data that we're going to gather when we start to use it in cardiology. So I wanted to focus on these two. And I learned them just recently. So please challenge me during the question and answer session about these, because I want to think more about it. It's misinformation and model collapse. So misinformation, not misinformation like I'm going to tell you something untrue, but an act of exclusion. So nuance is a human skill. AI doesn't grasp the subtle nuances that are critical in complex medical care.
There is research trying to get it there. It is not there yet. Edge cases. Edge cases require judgment. We often have atypical cases that don't neatly fit into algorithms. And then we need to use human judgment to identify those outliers and figure out where they go. How many people here have dealt with an edge case in some form in the past week that doesn't exactly fit? And then contextual understanding. AI can analyze the data, but it doesn't understand how it fits into a patient's unique life and circumstances. You can give it a zip code, but AI is not going to understand what is happening to the human in front of you that you need to get certain meds to, that you need to get certain tests for. And so misinformation can be an act of exclusion. AI doesn't have this information. So the answer it will give you will be wrong, not because someone fed it the wrong answer, but because it didn't have the nuance, the experience with edge cases, or the contextual understanding to give you the right answer. The second scares me even more. It is called AI model collapse. My teenage daughter, who's 17, was surprised that I was able to create this image using AI, by the way. She was like, did you do that? Did you get help? I was like, I really can't do this. Nowadays, when AI models are trained on data, that AI grabs AI-generated data itself, because AI has been around long enough in the ChatGPT, general-purpose generative AI realm, that sometimes it's not grabbing something that you wrote for the ACC that is published. It is grabbing something that was already summarized by AI from the ACC, and that is what it is now grabbing. And it's going to continue to do this. So that repetition can cause drift, it can cause us to not accurately represent what is happening, and it actually can just create unreliable outputs. Let me give you two examples. There's early model collapse. In this, what happens is the following. If you think about content that's out there: any time you use ChatGPT, put in a paragraph, and it will make it sound very nice, but bland, right? It'll make it sound like something that anybody could say. And in fact, what it will do is get rid of the outliers, because it'll take the majority of what it hears out there. Now we've lost outliers. We've lost unique scenarios. We're going to lose marginalized patients whose information is in a registry, because it's not as common as everybody else's. And it brings everybody towards the middle, but that's not actually the truth. The more concerning is the following. I put into a generative AI model, a large language model, "common mammal seen in the streets of New York City." I think I'd actually written "in the subway." And I kept asking that question repetitively, again and again and again, running the generative AI on the same content that it was bringing back to me, letting it do it again. And you can see the shift that happens from the blue in the middle to the red. It looks like a subtle shift, not a big shift, right? A little lateral drift. It ended up with a picture of the Barnum and Bailey elephants in New York City sometime in, I don't know, the 1970s. That was a drift that happened just from asking the same question 10 times, when you allow it to use its own data. It got confused. This is absolutely wrong. You say, Ami, this doesn't have anything to do with us. It does, because what if I put in something and I'm thinking about restrictive cardiomyopathy? And that physiologically is very close to constrictive pericarditis.
Like a lot of fellows, my whole fellowship was spent trying to figure out the difference between the two using cath lab tracings. You could take the pericardium off somebody's heart because the AI led you in that direction when, in fact, it is their muscle that is the problem. Because that's how closely related, in some ways, those two are when you pick up the same data again and again. And so these are the kinds of things that we need to watch for. And so we have solutions for it. I'm going to skip this. Oh, I didn't put in my solutions. Fine, I will tell you my solutions for it. That's OK. Luckily, I know them. The first is, don't use ChatGPT or Copilot when you're doing medical work. We are trying our best to start to create models where the data that you're using, the data we put in, is trusted data: ACC guideline data, expert consensus documents, papers written based on our own research from the registries, where you know where the data came from, you can see where those references are, and then you understand that at least you're not pulling things that you've never seen before. And so we're trying to work with that. It's part of the pilot. I don't know how well it's going to work. We're trying that. The second is, as we start looking at using AI, especially in large data sets like registries, it is really important for us to constantly have human-inputted data in the middle of it. You need to not only have human oversight. What you actually need to do is say, hey, this is fresh data that was just collected. It was not processed again and again and again by a bunch of different systems. This is fresh data that we're adding, and we're going to re-look at this. And then the third thing you do is re-look at your algorithm. Does the algorithm still work? Do our outputs still work? And that's the clinical oversight that needs to happen. So it's a lot of work to implement AI in large data systems. However, it turns out, time and again, the studies show it is better than doing it as a human alone, because we can't get the data as reliably. We can't get the data as quickly. And the amount of data coming from our patients is growing at a pace that we won't be able to keep up with. And so it's work. But if we do it right, if we know what we put in, if we look at what came out and we iterate with it, I actually think we're going to be really good at it. And again, I think this is the group that has been set up for this for years, since Dr. Brindis first said it: it's our job. Self-regulation. Other people will do it to us. AI will come to us unless we learn how to do it. So I will end with this. What you can do, please. Don't let your community blindly rely on AI. Hey, we have this new AI system. It's awesome. I'm just going to use it. Stop them right there and be like, did you check the answers? Let's go through it together. What's in the AI? Ensure that your group doesn't refuse to use AI. You know, this seems too advanced for us, and therefore we're not there yet. Let somebody else do it first. If we all say, let somebody else do it first, we are not going to do it. And then people who are not clinically expert in thinking about measurement and quality will do it. And it will be very hard to undo. We need a clinical infrastructure to test AI. We need a research infrastructure to test AI. I would love for people to come up with ideas, talk to me about it, see what we can pilot together. Because we don't have one yet that works. And so it's an important part of the next 18 to 24 months for us.
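[Editorial illustration, not part of the talk] The model collapse idea described above can be made concrete with a minimal sketch. The data, the single-Gaussian "model," the made-up rare subgroup, and the 4.0 cutoff below are all illustrative assumptions chosen purely for demonstration: a model is first fit to data that contains a small group of atypical cases, and each later generation is then fit only to what the previous generation produced.

```python
# Minimal, illustrative sketch (not from the talk): a toy version of AI model
# collapse. A single Gaussian is fit to data containing a rare "edge case"
# subgroup; each later generation is fit only to the previous generation's
# synthetic output. The distinct rare subgroup is smeared into the average on
# the very first refit and never reappears in later generations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" data: 95% typical values plus a small, distinct subgroup.
real = np.concatenate([rng.normal(0.0, 1.0, 950),   # common cases
                       rng.normal(6.0, 0.5, 50)])   # rare edge cases
print(f"real data: {np.mean(real > 4.0):.1%} of values are edge cases")

mu, sigma = real.mean(), real.std()
for generation in range(1, 11):
    # Train the next "model" only on what the previous model generated.
    synthetic = rng.normal(mu, sigma, 1000)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {generation:2d}: mean={mu:5.2f}, sd={sigma:4.2f}, "
          f"edge cases={np.mean(synthetic > 4.0):.1%}")
```

In this toy version, most of the damage happens on the first refit: the distinct 5% subgroup is absorbed into the average and never comes back, which is the registry concern raised in the talk. Outliers and less common patient groups are averaged away once a model learns mostly from model-generated content, and only fresh, human-collected data re-entering the loop, plus re-checking the algorithm against it, restores what has been lost.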
And then lastly, some of the research that we're working on with a group at NYU that does pure AI research is determining who is the right person or team to use AI. Because sometimes we're just doing it better as humans, and the AI is going to confuse the issue. It's going to make a young fellow question their judgment. It is going to slow down somebody who's been doing it long enough that they know. It is going to spoil a system that is the right system. There are times where AI is going to make a difference. We need to figure out when that is. And as my fifth grader says, there are other reasons that we need to figure out when to use AI and when not to. And so we're doing some research on when you lift the veil on AI and give it to somebody versus not. And so that research is coming. So I will stop there. And I will say thank you. There's a large team in Innovation that works at ACC, and I'm really super grateful to all of you. I'll end where I started, which is, my IMPACT Registry paper was one of my very first publications ever. And I was so incredibly proud that I was able to do that and that such a registry existed for adults with congenital heart disease. Because, you can see, I'm getting... like, it's a thing. It didn't exist when I started. And because of you, that data existed. So thank you for all that you guys do. Thank you. Thank you, Ami, for that fabulous talk. We'll jump into some questions from our audience as they're coming in. And I encourage anyone who has questions just to enter them through the app. And I'm just making sure that we're on time, that I've got 20 minutes for questions. OK. So some good questions here. Someone was looking for clarification on the study that you shared with the advance notice, and how that resulted in the more aggressive treatment for individuals, how that translated into that, what that process was. If I remember, it was Dr. Sandhu's study. And what they did is they took a select group of providers. And any time CTs were ordered in that system, they were not gated, not contrast CTs. They were generally ordered for lung issues and other reasons, if I remember that correctly from their demographics. Once calcium was identified, there was just an email that went through the EHR. It was an EHR-based study. It went to the clinician saying, hey, we notice that there's coronary calcium, with a link to, here are the guidelines for what you might want to consider doing for this individual. And so it was both a prompt that there's calcium, but also, importantly, the education to the clinician. Two weeks later, the letter to the patient. And again, with a prompt: you may want to get in touch with your clinician, and here's the kind of stuff that might happen at your clinician's office. I think they also had, here's the other data you might want to bring to your clinician, but I don't remember if that was this study or a different one. And so I think, importantly, and thank you for the question, it's not just, hey, somebody's got a calcium score, do something. It's primary care doctor and patient: we see this thing, it is a sign of potential heart disease, there are clear guidelines on what you can potentially do next, please have a conversation. I think it was that education part that actually made them that successful. Wonderful. This is a great question. Our hospitals are in transition to a program called Abridge to save time on clinic notes. The program listens to visits and uses AI to write the note afterwards. Are there any hospitals that are using this?
I can say yes, ours is using this. And any pitfalls to look out for, for those who might start using it? Abridge is a great company. There are others out there. Suki is another one that does it. And Microsoft's Nuance also does it. It's what we call voice to text. You talk with your patients. Your iPad listens to that conversation. Yes, you do need to ask for permission before you do this. And it generates a draft note for you. A good thing about Abridge: it was created by a friend of mine at the University of Pittsburgh, who is a cardiologist himself. So he knows what we like in cardiology notes. Number one, that's helpful. The second is, and I don't know how far along they are in this progress, but if you're thinking of starting it, please ask them. Their goal originally, when they developed it, was to have a note that is aimed at, I'm sending it to Cathie Biga, and therefore it is a clinician-to-clinician note; but also a version that is at an eighth grade reading level, I'm giving it to my patient; and ideally then a version that is appropriate for the billing people, such that some of the stuff that's important for billing, which is really not relevant directly to my communication with the patient and may get confusing, is separated. So when they originally started, that was their idea: can we get the right information from this draft note to the patient? I would say, I think, go for it with any system that you guys are willing to try, that your C-suite's willing to pay for. But I think the idea of letting us look straight at our patients, like look our patient in the eye, and not do this the whole time, is lovely. I would encourage them, if I can poke you a little bit, and I don't mean just Abridge, I mean any voice-to-text system, I would encourage them to have the ACC guidelines somewhere in there as a link or a dropdown. Because it's one thing to have your text written for you, but if we're all the way in there and we're writing your text for you, and one of the diagnoses is AFib, wouldn't it be great if that had a hyperlink to the appropriate AFib guidelines, right? And make the whole process of caring for a patient faster. So I'm hoping to see some of these companies do those kinds of things, not just for cardiology, but for healthcare in general. You know, I think the other thing that is critically important as you're here these next couple of days is to talk to your colleagues. This, to me, is one of the key areas where we can work to reduce burnout amongst all of our clinical staff. Being able to utilize the new AI methodologies, it's fascinating to watch them write this note, but I am so tired of my physicians going home and doing their Epic inboxes or doing their notes from a very busy day in clinic. So utilizing this type of methodology and using our team, I think, is also important, because Ami talked about billing, and we all know that we can use our team, especially our advanced practice nurses, to do that one note of record every year that will capture all of our patients' comorbidities. Just imagine if you could just talk it through with your patient and not have to worry about writing down 10 different comorbidities and having that be part of the permanent medical record. I sound like a Catholic school teacher: part of that permanent record. But I would encourage you all to talk to people who are using these AI tools and then convince your hospital systems that they are well worth the investment. Are you using it at Atrium at all, Olivia? We are, in the outpatient setting.
We haven't transitioned to the inpatient space yet, but yes, it's in our primary care space, and then actually one of our heart failure providers just started using it in the outpatient space as well, yeah. And I can say that this provider is one of our most empathetic, engaging, direct, compassionate providers, to your point of face-to-face contact, but he really struggled with charting just because of all that he wanted to include. And so he has tremendously benefited in the short period of time he's been using this, just a couple of weeks. That's great. MGH is also trialing Abridge. I'm not sure if it's come to cardiology yet. It's in primary care there. And then for people who need to contact someone who's doing it, Tina Pinto is an ACC member, a cardiologist at Inova in Virginia, and she's running a head-to-head trial of Abridge, I think, versus Nuance, which is Microsoft, which is much bigger, more established, and already in some of the hospitals, with Abridge a little bit more agile and smaller, to see which one might be better. Is it better to have a smaller, agile company, or do you just go with the big house because they're able to do it better? And so it'll be interesting to see the outcome of their implementation trial. So, Ami, knowing the audience that we have here today, we've got a lot of data abstractors, we've got a lot of our nurses: is there something in the cath labs that can capture some of this data so that we can stop abstracting charts and that manual process, something that will make data abstraction so much easier? What are you seeing on the horizon? You know, I think there's no reason we can't actually pilot these systems in the cath lab. The key is, these systems are built around knowing how to get rid of extraneous information, and that's the one thing that has stopped people from doing pilots so far. You have to have the people in the cath lab, it can be anybody, it can be your catheterizer, it can be your scrub nurse, but you have to have one main person who's gonna probably be there closest to the iPad to be able to direct the information: this was the blood pressure, this was the measurement, this was the LVEDP, the patient received this much. And so if you just have one team member in the cath lab willing to pilot with this, I think it's worth trying, and even just doing 10, getting a sense of what it's like, and talking to others. People always say, well, we have to go through so much to do it, and my mechanism over many years has been: if we have a license to do it in another area, and therefore it's already allowed, then doing 10 patients in a novel potential mechanism to start getting a feel for it and talking to your friends, I think that's the way to do it. So, in fact, this exact technology I think makes more sense than ambient listening in the room. It's gonna catch too much. You're probably better off having one focal point. True. We've got another really great question here that demonstrates an opportunity to reduce physician burnout. There's the suggestion of using AI for prior authorizations. What do you think about that? That sounds like a great opportunity. Yeah, so AI for prior authorizations is fantastic, because you know exactly what you're looking for. You know what the insurance companies want. They differ. Tufts is different than Blue Cross Blue Shield, which is different than, right? And so you can actually program all of that in. There was an article in, I wanna say, the New York Times.
I could be wrong, but it was mainstream media, and it talked about the war of the AIs. And it was specifically about, if we start using more AI for prior auth, will insurance companies start using it back against the prior auth, and will it be a battle of the AIs? It was a kind of facetious but interesting article about what you use AI for and how it supports you. But the companies that are doing this now could definitely use more people to trial it, to pilot it. I don't know them by name, but I'm happy to help you find them. But it's a growing industry. And I think that one's already taken off and is going to take more of a solid hold. When we start to see something, you know, you might ask me, hey, you're the chief innovation officer, why haven't we picked a company yet? This question comes up a lot, actually. So I'll just answer it up front. For all of these things, the number of companies that are arising by the minute and then dying by the minute is so great that if we said ACC will partner with X, the likelihood that X will be around is not guaranteed right now. It is part of the reason that implementation science, trying these technologies, is so important, because once we know which technologies make sense to our caregivers and actually improve patient outcomes, that then gives us a better sense of, okay, what do we need to build or support? And so I think we're in that stage right now where, whatever you can trial... we have some companies that we've gotten to know better, so not that they're favorites, it's just that they've had exposure to ACC and they're willing to iterate with us. But I will say, whichever companies you can find, if they're willing to work with you, the two things I look for are: the team has to be good humans. If you're not getting a good human sense, don't work with the company. And the second is iterative time. Make sure their team's fast enough that if you and your abstractors and your team are gonna do the effort to give feedback to them, then they're gonna quickly take that feedback and incorporate it into their mechanism. If they seem like the types who are gonna say, no, just use it, you'll get used to it, that's not iterating. And so that's probably not the right person to work with. Well, and I think the utilization of AI for pre-auth carries with it an enormous time saving. Oh, absolutely. Because the reality is we're human, especially if you're pre-authing a procedure and you forget one CPT code. Utilizing AI in that instance makes sure that it's very complete. I think the battle of AI versus AI is probably very real if you think about pre-auth. But I think it's one of those areas that we need to embrace quickly, because the amount of utilization of resources for prior auth, both for procedures and medications, is unsustainable. And I wouldn't take away the human element of looking at what it did and making sure it's right. Just note to self, I don't think any of these AI prior auths are ready to just send it off. Eventually they may be, if we get it right. Okay, just one quick last question that is a fabulous question and has come up a couple of times: how do we avoid worsening inequalities and disparities with AI? You demonstrated that there is the tendency to sort of eliminate those who are outliers. So how do we continue to broaden and include all? Yeah, this is true for digital health, remote patient monitoring, AI technology in general. I think I'll say two things.
So the first is, for AI purely, for anything that has to do with data, and that includes our registries, right? For anything that has to do with data, we need to ensure that the inputs are diverse. And that's additional work, and it needs to be done. So for example, I'm gonna speak for Harlan for a second, I hope he doesn't mind. When we're looking at the JACC journals and you are showing a new AI, but the new AI was built in a very homogenous population, right? Our reviewers should be saying back, hey, homogenous population, all you're doing is data into an algorithm. Go find a matching population of heterogeneous people, right? And bring that in, whether that's location, rural versus urban, whether that's... And so I think we need to be demanding that as we're developing the AI, we're developing it based on more diverse populations. We can't develop it in one population and then say, let me see if it applies. That's just not the way that I think we should be doing things. The second is, and this is a bigger question for innovation in general, we have to do the innovations in the communities that need the innovation. So we have to go there and do the remote monitoring programs in Birmingham, Alabama, UAB's got a great program, we need to do it there. And it needs to start there. We need to think about the Rural Health Collaborative in the US, and we work with them quite a bit, but we have to think about piloting with the Rural Health Collaborative when we're gonna run the next remote patient monitoring program. If you think globally, about global quality solutions and other attempts to create registries internationally: how do we get our registries there? How do we walk people through accreditation in environments that may not meet all the criteria, but then help them understand how to build the systems to meet the criteria to be an accredited hospital in a rural area? And so I think we have to go to the communities where these people live, and then we have to devise the plans there. We can't make them in an ivory tower and then hand them out. Well said. Well, thank you very, very much for a fabulous talk, fabulous discussion. We are next going to bring Steven Bradley, Dr. Steven Bradley, to the stage to give our 2024 Ray Bahr Award of Excellence. I want to first start by thanking our panel this morning and Dr. Bhatt for a fantastic talk. That was absolutely fantastic. You know that. I do have visions of Clippy from the old Microsoft Word coming up while I'm trying to write a note, saying, are you trying to file a prior auth? It seems that you need to say this. It's really an honor and a privilege to introduce today's Ray Bahr Award winner. The Ray Bahr Award is awarded annually, in honor of Dr. Raymond Bahr, to an individual who demonstrates extraordinary excellence, vision, and leadership in advancing healthcare. Dr. Bahr founded the Society of Cardiovascular Patient Care, which later merged with the college to become Accreditation Services. It's important to note that the criteria for the award are a cardiovascular disease healthcare provider who has profoundly impacted the care of patients with cardiovascular disease in one or more of the following areas: leadership, patient care, education, either professional and/or public, research, health policy, and system process redesign or reinvention. We have a wonderful list of folks who have won this award over the past 20 years, and this actually marks the 20th year of the award.
You can see all of the esteemed awardees from past years. And I'm honored to announce that this year's award winner is Dr. Biykem Bozkurt. You can make your way up. One moment, please. Dr. Bozkurt is the Senior Dean of Faculty, the Mary and Gordon Cain Chair and Professor of Medicine, Director of the Winters Center for Heart Failure Research, and the Associate Director of Cardiovascular Research at Baylor College of Medicine, and the Medicine Chief at the Michael E. DeBakey VA Medical Center in Houston, Texas. Those are a number of titles; that's fantastic. Throughout her career, Dr. Bozkurt has been recognized for excellence in clinical care, education, and research. She was the recipient of VA Career Development and Merit Research Awards, the American College of Cardiology W. Proctor Harvey, MD Young Teacher Award, the American College of Cardiology Gifted Educator Award, the Baylor College of Medicine Presidential Award in Education, and the Lifetime Master Clinician and Professionalism Award. She has been listed among Clarivate's Highly Cited Researchers, the top 1% in Web of Science, in the years 2018, 2019, 2020, and 2023. Dr. Bozkurt is the Editor-in-Chief of JACC: Heart Failure, served as President of the Heart Failure Society of America from 2018 through 2020, led the Universal Definition and Classification of Heart Failure as its chair in 2021, and was the Vice Chair of the 2022 AHA/ACC/HFSA Heart Failure Guideline Writing Committee. She has served as a Senior Associate Editor for Circulation and the Heart Failure Section Editor for the Journal of the American College of Cardiology, actively participates in clinical and translational research, provides advanced heart failure patient care, has presented at national and international scientific sessions and teach-ins, and mentors trainees and faculty. As is obvious, there's no one better suited for this year's award, and congratulations to Dr. Bozkurt. Thank you. That concludes this morning's session. Thank you. I think a few would like to. Come on.
Video Summary
The 2024 Quality Summit in San Antonio, attended by over 800 participants, is celebrated as a premier conference focusing on cardiovascular care. Barb Christensen, Senior Director of Registry Services, expressed gratitude to the organizing faculty, session planners, exhibitors, and ACC leadership. Acknowledgments were extended to Dr. Hani Najm of the ACC Board of Trustees for reinforcing the value of the participants' work. Dr. Olivia Gilbert, the program chair, emphasized the Summit's focus on quality improvement, registries, and accreditation, fostering collaboration across disciplines to advance cardiovascular outcomes.

Key presentations included a keynote by Dr. Ami Bhatt, ACC's Chief Innovation Officer, discussing the impact of evolving technologies on quality analytics and care delivery. Dr. Bhatt highlighted the necessity of digital health and AI in managing the increasing complexity of medical data and enhancing patient care, urging the audience to adopt these technologies while demanding robust quality standards. Challenges such as data bias, misinformation, and AI model collapse were addressed, promoting a collaborative intelligence approach rather than pure reliance on AI.

The Summit also honored Dr. Biykem Bozkurt with the Ray Bahr Award for extraordinary contributions to cardiovascular disease care. Dr. Bozkurt's achievements in clinical care, research, and leadership, including her roles at Baylor College of Medicine and the Michael E. DeBakey VA Medical Center, were recognized as aligning perfectly with the spirit of the award.

Overall, the Summit emphasized engagement, learning, and collaboration to transform cardiovascular care, underscoring the critical role of multidisciplinary efforts in driving quality improvements in healthcare.
Keywords
Quality Summit 2024
cardiovascular care
Barb Christensen
Dr. Hani Najm
Dr. Olivia Gilbert
Dr. Ami Bhatt
digital health
AI in healthcare
Ray Bahr Award
Dr. Biykem Bozkurt