AI and the Care of the CV Patient - 2023 Quality S ...
AI and the Care of the CV Patient
Video Transcription
Good morning, everyone. It's great to be here. My name is Brendan Mullen. I'm Executive Vice President with the American College of Cardiology, and I have the great good fortune of working with so many of my wonderful colleagues here to help lead the National Cardiovascular Data Registry as well as our accreditation services, and it's really fantastic to be here. Now, if you caught the lead-in music, I'm pretty sure that was a Madonna "Vogue" sample, so I'm pretty sure they're stereotyping the demographic of the population here. I'm a little disappointed we didn't get a Taylor Swift lick; I think that would have been a lot cooler, so I have to speak to the producers after we're done here. I think we're gonna have a great hour here. I do want to let you know in advance that we are going to be using the audience response system, so hopefully you've all had a chance to practice this over the last two or three days. If you have not, you go into your app, you find this session, and down at the bottom you're gonna find an icon for the audience response. You can leave it there, and when we get to the question, that question will populate onto your app, so you can just be ready, because we're gonna be asking you very technical questions about the mathematics of computational artificial intelligence, so we really want to make sure that we're getting good answers to that. Okay. All right, so here's the plan for the next hour or so. I'm gonna give a couple of opening comments here to set the stage for a conversation, and most of our time is really going to be our spotlight with my friends Shilpa and David. 
Shilpa is a colleague of mine at the American College of Cardiology, managing director of our innovation program, and David runs many of the quality programs as a cardiologist at UCLA. They both have a lot of practical experience with artificial intelligence, and I think that's the emphasis we're trying to understand here: practically, how should we be using these tools and technologies, and how is it going to affect the way not only medicine is being practiced but, of course, how we're capturing the information to try to quantify and understand medicine, which is at the heart of what our registry and accreditation programs do. Now, welcome to the cookie hour, though. I think we should just admit this up front: I'm actually pretty sure they did program us here specifically for this hour. It is Friday, it is the hour before lunchtime, and you've all been in sessions for, as I understand, going on four days now. So we decided we're gonna have a little bit of fun here, because it's really hard to approach the topic of artificial intelligence without laughing a little bit. You see headlines like this all the time: "AI industry and researchers sign statement warning of extinction risk." And this actually isn't a joke. Out in Silicon Valley, folks that have been working on this full-time going back ten years are discussing this as a very, very serious proposition, and there are government agencies and government officials that are also talking about this stuff. It's the first time, at least, that I've had direct interaction with a technology that would get associated with those types of questions. And when you're talking about those types of questions, you think, God, how is that actually going to apply to my life? I personally would have put a nuclear exchange with Russia or China, or climate change, as the two leading concerns about extinction, but now we're talking about artificial intelligence, and now we're talking about 
bringing that same technology into medicine. So how are we gonna handle that? What we're gonna do is go to our first audience response question. We're using really advanced pedagogical theory here: we're gonna do a pre- and post-test, okay? This is gonna be your pre-test; David's question in his presentation is gonna be your post-test, and we're gonna see how you improve. So the question is: when I say artificial intelligence, do you think the Terminator, Skynet, the end of the world, or do you think C-3PO and R2-D2, our sort of plucky robotic companions that are loyal to us and will help us through anything? So if we could go to the ARS system, you have two choices here. I'm curious to see where everyone comes out. All right, let's see what we got here. That's a pretty even split. We're gonna see over the course of the next hour whether we can convince some of the fearful out there that artificial intelligence is a technology that we can work with and harness. I will say, actually, I'm really delighted that worked. You have no idea how many artificial intelligence calls and meetings I've been in where the PowerPoint fails and the computer fails, and so we're talking about all this advanced technology and we can't even get our phones to work. So this was a really good start, probably because there are real human beings running the technology, but we'll get back to that. The two comments I wanted to make before I pass this over to set the stage: this idea of humanity interfacing with technological and scientific change, and it being both exciting and terrifying at the same time, is obviously not a new concept. I think that has been sort of a hallmark of civilization. And particularly since we're heading up to Halloween in just a few weeks, I was thinking about Mary Shelley's Frankenstein. Mary Shelley wrote that book when she was 21 years old on the shores 
of Lake Geneva; it was published in 1818, which is now 205 years ago. Most literary critics who think about this book see it as society beginning to wrestle with the implications of the scientific revolution, the medical revolution, and the Industrial Revolution. Things were changing dramatically in Regency England particularly, and it scared a lot of people, and that manifested itself as the boogeyman, the monster, in Mary Shelley's Frankenstein. But if you think about what was in their daily lives in 1818, many of the things they would have thought were key to their humanity, for example horses and candles, and diphtheria and typhus: fortunately we don't worry about those things as much, and yet we are still a thriving human society. So on one hand, we just need to keep in mind that these technological changes have always felt like this at the absolute vanguard. The other thing I'm gonna challenge you to do as you're thinking about these technologies is to actually challenge yourself to take the Turing test. Now, this is not Alan Turing; this is Benedict Cumberbatch, because he's a lot more handsome than Alan Turing actually was. Alan Turing was a mathematician who became generally known as the first computer scientist, who developed much of the original thinking behind general computing, and you may be familiar with his work from The Imitation Game, which is where this picture is taken from. Now, Dr. 
Turing was famous for raising a question in 1950, well before we even had functioning microprocessors and computers, so well before artificial intelligence: how are we going to know if a computer is thinking? And Turing was smart enough to recognize that the concept of thinking is actually a philosophical one. It is a very complicated one, because it deals with emotions and rationality and all these sorts of issues, and he said it would be very difficult to know if a computer is thinking by that human definition. So he came up with something called the imitation game, which later gets called the Turing test. The Turing test is this: you put a human being in a room, in another room you put another human being, and in a third room you put a computer, and they're only allowed to interact by a text-based interface, so texting a computer. The human being needs to figure out which of the two interlocutors in the other two rooms is the computer and which is the human being. Turing's hypothesis was that if you couldn't tell the difference between who is responding, the fully automated computer or the human being, then indeed the computer was thinking. Okay. Now, what's interesting is that in the whole history of artificial intelligence, most of us (and I include myself in this; by the way, I was so excited to get a data geek thing on my badge, I feel so validated in this room), even for us data geeks that have been working in these fields for the better part of the last several decades, we've never actually been able to interact with and test artificial intelligence, because it was highly mathematically complex and it was running in the background. But with the advent of ChatGPT, which kind of burst onto the scene, and the whole idea of large language models and generative AI, we now see them on Google and Bing as well as on ChatGPT 
itself. We're all potentially interacting with these technologies on a daily basis, so you can actually run a Turing test, and this was never possible before. You can get on ChatGPT and you can start to ask it questions and have a conversation, and you have to ask yourself: how convinced am I that this machine is thinking? The reason why I think it's smart for all of us to do that is because it just increases familiarity with these technologies. It makes it less scary; just like the monster in Frankenstein or any boogeyman, the more we interact with it, the more we talk to it, the less strange it feels, the less artificial it feels, and we can start to make critical judgments about how we want to use these technologies, and so, critically in medicine, how we want to trust these technologies in real life. I'm about to hand it over to Shilpa in just one minute, but here are the types of questions I've asked when I've done my own Turing tests with ChatGPT. I took three different topics. One topic was a topic I knew professionally quite well, where I also know where the ambiguities are. So I asked it a lot about cardiology and what you should do in different patient scenarios, and I was really impressed by the answers I got back, but I could also see the weaknesses, where it wasn't able to synthesize or it wasn't able to cross guidelines, so I developed a sense of trust there. I took another topic that I'm really interested in, which is physics, but I know very little about, and I asked it questions about, say, general relativity. I was blown away by how smart the computer was; it knew so much, it could tell me so much answering my questions. But you see the point I'm getting at: I know cardiology pretty well, so I could spot where the machine was making mistakes and wasn't fully trained, which I think is really cool. But in physics, which I don't know much about, I was completely taken in by the machine, and I'm sure it was making the same types of mistakes 
that it was making in medicine, but I couldn't spot them. Then the third Turing test I would recommend you do is interact with the machine on something that you have very specialized knowledge about. It could be a hobby, could be, say, your local community or something like that, where there maybe isn't a ton of information out there on the internet that the machine was trained on, and see what kind of answers you get there. The one I did is about my 12-year-old daughter. She would require me to say both of these things: she's a vicious soccer player, but she's also a ballerina, and she just went up on pointe, and I'm the only person in our family who can sew, so Dad sews pointe shoes all the time now. Does anyone else have daughters who dance, and you're sewing their pointe shoes? Oh my god. I feel like my fingers are bleeding. Anyway, I decided at some point I need to learn a little bit about ballet, because she's taking this very seriously, and so I was reading a book on ballet. There's actually very little written on the history of ballet, because it's a visual performing art that can't be written down. So I started asking the machine questions about ballet, and it didn't have a clue. Me, after one book, I was smarter than this machine. Why is that? It's because there's not actually a lot of written information on the internet about ballet; the machine had nothing to think about to answer me. So there are no grand epiphanies in that, but that was a little bit of my journey of how I worked through thinking about artificial intelligence, to try to discern, particularly with the AI that's gonna be interacting with us through natural language, how we should be sensing and trusting and building up a sense of the way we're gonna relate to these technologies. So that finishes me. I'm gonna pass this over to Shilpa. If we could bring up Shilpa's presentation, I'm gonna pass that to you, and 
Shilpa is gonna introduce herself and then talk about how the ACC is interacting with these technologies and trying to figure out what really is gonna benefit us in cardiovascular medicine, hopefully for the NCDR as well. I think you're probably, I would say, one of the world's foremost experts in how we've tried to bring AI into registries through using some of the natural language processing type technologies that we've worked with. So, Shilpa, please take it away. So great to be here with you guys today. I'm Shilpa Patel, the managing director of innovation at the American College of Cardiology. I have nothing to disclose. So, you know, I want to talk a little bit about my journey with artificial intelligence from a personal perspective, similar to how Brendan was talking about it. When I was thinking about this, you know, in parenthood there's a saying that I'm sure all of you know, that it takes a village to raise a child. That's my three-year-old son Ishan in the middle of this picture collage, and I don't think I realized how much of a village it would take to raise him, having support from all types of people. But what I don't think I realized early on was how much I would start to use technology in parenting, and what I mean by that is that just recently I have been leveraging artificial intelligence to help support me as a parent. About a month ago, I asked it, can you help me? I think the prompt was: develop a toddler-friendly meal plan for a picky eater that is vegetarian. And ChatGPT, within seconds, made me a seven-day meal plan with a grocery list, and I was able to look at that list, send it to myself, and go to the grocery store. It saved me tons of time. I didn't have to meal plan; I could adapt some of the ingredients that I knew he wouldn't eat, but overall it saved me tons of time and supported me as a parent. So now, I would say, 
technology is part of my village to raise a child. Thinking about that same concept, as in parenthood it takes a village to raise a child, that is also true in health care: it takes a village to take care of a patient. Traditionally that village has been the physician, and then it evolved to include the entire clinical care team, so pharmacists, nurses, physical therapists, all of the people that are part of that care team. But now there's a new member of that village, and that's digital health, which includes artificial intelligence. By using artificial intelligence, we can start to instill collaborative intelligence that will support both the patient and clinician, if done correctly. When we really think about the ultimate care goals, which really haven't changed in the last decade and probably won't change in the next decade, it's really to improve cardiovascular outcomes, clinician well-being, care coordination, and access to care. The difference is that the tools we have at our disposal have changed. We have digital health now that can actually help us start moving the needle on some of these outcome measures. So let's talk about the current landscape. Everyone in this room really knows the current landscape of health care, but there's a tremendous amount of systematic inefficiency across the board, from communication, to things not being done in a standard way, to GDMT not being optimized. There's also accelerating complexity of patients: lots of patients with chronic diseases, and there are too many sick patients and not enough clinicians to take care of them, and that's not going to change. There's information overload: clinicians like all of you are inundated with data, not only from your EMRs and claims-based data, but data coming from wearables and devices. How do you make sense of this? How do you use it to really help your patient? What do you do with it? There's rapid technology evolution: technologies that used to take 10 years to make take months now to develop 
because of the evolving technology landscape. And then there are also marked disparities in access and quality of care. This is the current landscape, and this is where digital health can help us solve some of these issues that we're facing. So I'm going to take just a step back and really think about what the components of artificial intelligence are. It's not scary; it's not some sci-fi movie. It is complicated, but we can break it down. Artificial intelligence is an umbrella term. All of you have used artificial intelligence, probably for several years now. I'm sure many of your institutions have been doing predictive analytics for years as well. In fact, I'm sure if anyone here has posted anything on social media today or yesterday and has gotten a prompt saying these are the people you should tag, that's artificial intelligence. So you're using it in your daily life. Under that umbrella, artificial intelligence is basically a way for technology to mimic human behavior. Within that, there's machine learning, which is essentially techniques that learn to perform tasks that humans do without being explicitly programmed. Then there's deep learning: deep learning is taking vast amounts of data, adapting to that data, and learning from it. And then there's generative AI, and that's where ChatGPT actually falls. That's the creative side of AI: it's able to generate content on its own, in a creative way, by interacting with users. Then, when you think about the components that really go into AI, there are inputs, algorithms, training data, models, output, and a feedback loop. The input: just like we need information to make decisions and learn things, AI needs data to be able to learn, and data can come in the form of text, images, or structured data. Then there are algorithms: algorithms are a set of instructions to help process that data. And then there's training data: just like we learn from experiences, AI learns from 
experiences. By training on the data and going through several examples, it is able to recognize patterns and do better pattern recognition. What comes out of that training data is a model, and that model is based on all the examples it has run through, so a model that is trained on recognizing pictures of cats will actually be better able to recognize pictures of cats. Then there's an output. That output could be an image, a picture, a prompt, a decision, a recommendation, and that comes after all of that data has been processed. And then there's a feedback loop, and this part is really important, because that's how AI keeps evolving and learning: by the feedback that it's getting. Now, this slide is an old slide, so some of these companies may not actually exist anymore, but it's just to show you how vast the landscape of AI in health care is. I was just recently at an innovation conference, and the number of AI companies has almost quadrupled just in the last two years. There's a lot of noise in this industry, and it can be really overwhelming to sift through all of that noise and figure out which AI tools to use, what they are doing, and which ones to trust. The way that we've been thinking about it is, instead of looking at all of the noise and all of the different companies out there, start with your clinical need. What's your problem? Ask the right questions, and then map that to the AI tools and technologies that will help. It will seem a little less overwhelming. So AI is being used in all kinds of industries, and now it's becoming even more prevalent in health care. There are a lot of different instances we can point to for how it's being leveraged in health care: it's being used in drug discovery, it's being used to generate hypotheses for academic research. Here are a couple of the ways that the College is seeing AI evolve in health care. This first one: voice-to-text to decrease documentation 
effort. This one I really like for AI, because it's really high impact but low risk. This is a huge pain point for many clinicians, probably for a lot of you: how can you leverage AI to help with documentation, to save you time for other tasks like more patient care? AI can actually help in creating and synthesizing clinical notes, patient-centric notes, billing notes. Early detection and triage to appropriate care: AI is being used to identify patients early, risk-stratify patients, and then get them to the next or appropriate level of care. Optimized EHR and social determinants of health to improve quality and data: these two I really like. I was thinking, you know, the College a few years ago launched Patient Navigator and the Hospital to Home initiative, aimed at reducing readmissions for heart failure and MI. Several of your institutions probably participated in that. One of the things we often heard for why patients were being readmitted was social determinants of health, but that was really tough to screen for. A lot of the validated tools, like LACE, only take into account clinical indications. Through the use of AI, it's possible to summarize and have access to social determinants of health to create personalized, better recommendations for patients based on that data at scale, instead of having to do assessments one by one. That doesn't mean the clinician is not in the loop. The clinician should be the ultimate person who takes that assessment and then uses clinical judgment to implement a care plan that works for both the clinician and the patient. And then finally, enabling practice efficiency with personalized responses to common questions. I think we saw, during COVID, an influx of digital ways to interact with patients, which then meant clinicians had a lot more messages in their Epic inboxes. 
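[Editor's note: the LACE index mentioned above combines four inpatient variables into a readmission-risk score. As a rough illustration, a scorer might look like the sketch below; the point mappings follow the commonly published LACE derivation, but treat them as an assumption to verify against the original source before any clinical use.]

```python
def lace_score(los_days: int, acute_admission: bool,
               charlson_index: int, ed_visits_6mo: int) -> int:
    """Illustrative LACE readmission-risk score (range 0-19).

    L: length of stay, A: acuity of admission,
    C: Charlson comorbidity index, E: ED visits in prior 6 months.
    Point mappings are assumptions based on the published LACE
    derivation; verify before any real use.
    """
    # L: length-of-stay points (1-3 days score their own value)
    if los_days < 1:
        l_pts = 0
    elif los_days <= 3:
        l_pts = los_days
    elif los_days <= 6:
        l_pts = 4
    elif los_days <= 13:
        l_pts = 5
    else:
        l_pts = 7
    # A: 3 points for an acute (emergent) admission
    a_pts = 3 if acute_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c_pts = charlson_index if charlson_index <= 3 else 5
    # E: ED visits in the prior 6 months, capped at 4 points
    e_pts = min(ed_visits_6mo, 4)
    return l_pts + a_pts + c_pts + e_pts

# A 5-day acute admission, Charlson index 2, one recent ED visit:
print(lace_score(5, True, 2, 1))  # 4 + 3 + 2 + 1 = 10
```

Note that, as the talk points out, every input here is clinical; there is no term for social determinants of health, which is exactly the gap the AI tools described above are trying to fill.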
And I think there's a way for us to leverage technology to personalize some of those responses and alleviate clinicians having to respond to every single question. Across all of these areas, and I'm going to reemphasize this over and over again in my talk, the clinical team has to be in the loop. They have to be part of every part of the process. They're ultimately the ones who will make the final assessment and determine how good these recommendations are, how they want to implement them, and how they explain them to patients. Now I have an audience response question for you guys. Has your institution leveraged AI for administrative tasks, diagnostics, treatment planning, or patient engagement? Yes, regularly. Yes, occasionally. No, but considering. No, and not considering. Vote now. Interesting responses. So it looks like "yes, occasionally" is the one that had the most responses, with 108. The next one was "no, but considering," and the last one, with 20 responses, was "no, and not considering." I think this breakdown is interesting to see where you all as individuals are in terms of implementing artificial intelligence, because it is here. We do have to embrace it, and I think over the next couple of years it's going to be more embedded into our day-to-day lives, both personally and in health care. Shilpa, since you're out in the broader world talking to all sorts of companies, if we asked this question to a non-hospital health care audience, how do you think those responses would differ? Are most industries ahead of us, or the same, or behind us, do you think? I think a lot of industries are probably ahead of us outside of health care, just because, with a lot of things, the stakes are lower, and in health care they're so high, because they are actually impacting clinical care. So I think that's where we see a little bit of a difference. You guys see the next slide? 
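[Editor's note: the component breakdown from earlier in the talk (inputs, algorithm, training data, model, output, feedback loop) can be made concrete with a toy example. This is a minimal sketch and not any of the clinical tools discussed: a perceptron whose weight vector plays the role of the "model," updated by a feedback loop over labeled training data.]

```python
# Toy illustration of the AI components named in the talk:
# inputs -> algorithm -> training data -> model -> output -> feedback loop.

# Training data: 2-feature inputs with known labels (1 or 0).
training_data = [([2.0, 1.0], 1), ([1.5, 2.0], 1),
                 ([-1.0, -0.5], 0), ([-2.0, 0.5], 0)]

# Model: a weight vector plus a bias (it starts knowing nothing).
weights, bias = [0.0, 0.0], 0.0

def predict(x):
    """Output: the model's decision for one input."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Algorithm + feedback loop: the perceptron learning rule.
# Each wrong output feeds back into the model as a weight update.
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)        # feedback signal
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

# After training, the model recognizes the pattern it was shown.
print([predict(x) for x, _ in training_data])  # [1, 1, 0, 0]
```

The point of the sketch is the loop structure, not the math: the same input/model/output/feedback cycle is what scales up, with vastly more data and parameters, to the deep learning and generative AI systems the talk describes.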
Okay, so now thinking a little bit about how we actually get artificial intelligence integrated into the care delivery system: it's great if it can do all of these things for us and really help and support us, but getting it implemented, as you guys know, is a tough thing. And I think about this a lot in terms of quality improvement. All of you have run quality improvement initiatives at your own institutions, and it's never just about that process or that tool. It's about the systems you have in place. It's about getting the buy-in from the correct people. It's about measuring the output. And that takes a lot of effort, but it's not just about the tool. So in this case, it's not just about the technology. It's about the people. It's about the process. You need an AI strategy at your institution. How are you thinking about this? How should we be deploying this? What's the governance around the use of these technologies? We need to ensure that they're going to be safely and effectively deployed. This next one is an important one: do we have the right platform? Do we have the right systems to integrate the AI? Because the AI, again, is an algorithm. It's one thing. It needs to be more comprehensively integrated into the overall systems of care. Is there operational readiness across your institution? And then finally, education, which is really important: education at the clinician level, at the administrative level, and at the patient level, because everyone's going to be interacting with these technologies, and education is key both to providing feedback and to successfully deploying these technologies. And then again, the clinical teams need to be in the middle of this. They need to be part of every step of the process, because it's not just about the technology. 
Now you can't talk about artificial intelligence and all of the hype and all the promise without going into this with caution, really identifying the risk, and really knowing what those risks are. So outcomes can't be a black box. They need to be explainable. We need to ensure that the models are trained on representative populations, because bias in means bias out. And there's a lot of uncertainty, so we really need to understand how these models are trained and on what data they are trained. There needs to be adequate guardrails that are put into place, especially right now, because there's not a lot of regulatory framework or guidance as these technologies are evolving, because they're evolving so quickly. And there needs to be consideration of ethical and privacy aspects to ensure that data is handled responsibly. Again, there's a lot of opportunity in artificial intelligence, but we need to implement it with caution by thinking through some of these risks. So the American College of Cardiology has an innovation program. We've been thinking about these things since about 2017. Its focus is on the digital transformation of care delivery through technology. We focus on virtual care, artificial intelligence, and remote patient monitoring. And what we've noticed is we started thinking about these in kind of almost siloed ways, like a singular one, two, three. And what we've noticed is really these are a comprehensive system. They all kind of build upon each other and really need to be implemented in health care. So it's not about digital health, it's just about health care and how we leverage all of the aspects, both current and new assets, to be able to create seamless care for patients and clinicians. The innovation program works across silos because it's an ecosystem approach. We at the ACC cannot do this alone. 
We work with technology companies, both big and small, those that have been in health care for a while and those that are non-traditional health care companies, venture capital, and other professional societies that are thinking about this, all with the goal of creating sustainable cardiovascular care. How we do this is, in a lot of instances, by co-developing these technologies. To date, I think our team has assessed over 800 companies. Not all of them are artificial intelligence, but they span digital health. In certain instances we are actually looking under the hood and helping to develop these technologies with the lens of the clinical team: optimizing clinical workflow, thinking about the patient experience, making sure that these technologies have our guidelines in them, and really guiding how they will be incorporated into care, mainly so we don't see another Epic, or EMR, that came on board built for clinicians but by builders and engineers not really thinking about the clinician workflow. We're building technical frameworks: what are the features that technology needs for both consumers and for physicians? How do you sift through the noise? What are those features? And then application guidance: just as in QI we need to share best practices and challenges to learn as a community, the same is true for technology adoption. We want to be able to provide guidance, but where there's still a lack of evidence in some of these areas, the way we're going to build that evidence is by adopting some of these technologies, sharing the best practices, learning when these don't go the way we think they're going to go, and sharing those stories as a community. This slide shows a couple of the innovation use cases where the ACC is actively engaging with different companies that have AI in their solutions. 
So CLINT is a technology that is able to look at the clinical trajectory of a patient and predict certain clinical outcomes of disease over a certain period of time. It's also able to prompt optimal guideline-directed medical therapy based not just on claims and EHR data but also taking in some of that social determinants of health data. GE Healthcare has a slew of AI solutions, one of which is a dashboard that looks at atrial fibrillation patients and their rising risk, and is able to do a risk-benefit analysis on who should be anticoagulated. Aidoc is image interpretation, and is starting to look at coronary artery calcification, really looking at that middle ground between low and severe, and being able to triage that to the next level of appropriate care. Abridge is that documentation piece: being able to leverage technology to do documentation, to create patient-, billing-, and clinician-centric notes, so the clinician doesn't have to do those summaries anymore. And then Carta Healthcare, which uses natural language processing for data abstraction while keeping the clinical teams in the loop for quality checks. So, just to reiterate a little of what we've talked about today: there is an opportunity for artificial intelligence to better support clinicians and patients. But in order to do so, we all need to educate ourselves on the authorized uses for any technology or tool you're implementing, whether it's AI or any other digital health tool. What's the regulatory approval process it went through, if any? And what are the demographics of the group on which the AI was trained? That way, if we're using it on a certain patient population, we know who it was trained on and whether that's representative. I think overall there's not going to be a one-size-fits-all in technology. 
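[Editor's note: natural language processing for registry abstraction, as in the Carta Healthcare example above, amounts to pulling structured data elements out of free-text notes and surfacing them for human review. As a heavily simplified sketch (real systems use trained language models, not a single regular expression, and the note text here is invented), extracting an ejection fraction from a note might look like this.]

```python
import re

# A hypothetical snippet of free-text documentation.
note = ("Echo performed today. Left ventricular ejection fraction "
        "estimated at 35%. Patient started on GDMT.")

def abstract_ef(text):
    """Pull a candidate LVEF value out of a free-text note.

    Returns the value together with the matched phrase so a human
    abstractor can verify it, keeping the clinical team in the loop
    for quality checks, as described in the talk.
    """
    match = re.search(r"ejection fraction[^0-9%]*(\d{1,2})\s*%",
                      text, flags=re.IGNORECASE)
    if match is None:
        return None  # nothing to abstract; leave the field blank
    return {"lvef_percent": int(match.group(1)),
            "evidence": match.group(0)}

result = abstract_ef(note)
print(result["lvef_percent"])  # 35
```

Returning the matched evidence alongside the value is the design point: the tool proposes, and the abstractor or clinician confirms, which is the "clinical team in the loop" pattern emphasized throughout this session.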
We're going to have to have a collection of tools, pick the right one for the right clinical scenario, and then use clinical judgment to determine how it is implemented and used. We need to ensure that AI tools provide value to the patient and the clinician and really go toward creating an ecosystem for comprehensive, seamless care. So going full circle back to the village concept, it really does take a village to take care of a patient, and we now have an additional supporting member in digital health and artificial intelligence. The future of AI will be driven by connected care teams that are leveraging technology to improve care. So I challenge all of you to embrace artificial intelligence. Use it. Be adopters of it. If you aren't already, learn about it. Help implement it. Help develop the technology. Test it. Share your experiences, and let's change the dialogue from artificial intelligence disrupting health care to artificial intelligence impacting health care. That's great. Thanks, Shobha. If folks have questions for Shobha, we're not going to do this exactly the way we always do it, where we run right through the speakers. We're actually going to chat for a couple minutes here, so please submit questions and we'll see if any come in for Shobha, and then we'll go on to David's talk, where he talks about how this is actually happening in the hospital, in the clinical context. But one of the questions I wanted to start with, Shobha, and then for you, David. Well, actually first, really important: are you Terminator or are you C-3PO, for both of you? Terminator. You're Terminator. Okay, all right. David? How about R2-D2 gone rogue? So a little mischievous R2-D2. Okay, that's a good answer. I think I come out somewhere in the Star Wars universe, right, where these technologies can be really, really good and they can be really, really dangerous. I think that's personally where I am. 
So, you know, Shobha, you talked about this continuum of applications of these technologies, from very sophisticated clinical decision-making to, in some ways, straightforward, rudimentary administrative processes. As you're out there in the community, where do you see most companies focusing their AI efforts, and what's your opinion, and the innovation team's opinion, about where it's going to have the most impact first on that administrative-to-clinical continuum? Well, the focus is hard to pin down because companies are working on all the aspects. In terms of where I see it being adopted the quickest, it's those areas where it's high impact, low risk, and that's really on the documentation front. A lot of people are already using AI to assist with documentation because, again, that takes out a major pain point for clinicians, having to do so much documentation, and lets them go back to really taking care of patients. So it's really supporting the clinician and giving time back. In terms of where it's going, it has the opportunity to really help with optimizing guideline-directed medical therapy, and I think that's a relatively safe space for it, because right now, routinely, when guidelines are put out, it takes almost 15 years for them to be really implemented in routine clinical care, and a lot of that is because as they change, it's hard to keep up to date with all the changes. Through AI, it's able to offer an assessment to the clinician, and then the clinician can use their judgment. I mean, we all know that AI can sift through a lot more data and noise than we can as humans, but what we are able to do is use our own judgment, our own education, our own knowledge, and then decide what should be implemented. Yeah, it's really interesting. 
I think, listening to your talk, if I had to break this down into takeaways for our audience on how you approach this, there are three rules of thumb that came out of your talk that I think about. One is that the task to be done is far more important than the technology doing it, so don't get distracted by whether it's AI or not. Does this technology, does this product, solve the problem you're actually trying to solve? Number two, I love your point about high impact, low risk. I think we should keep that as a watchword for how we bring in these technologies. And the third thing you mentioned, with the guidelines, is: do we have boundary conditions that are pre-established, so we know if the machine is operating outside those boundaries? The good news is that with guidelines, we know if the machine gives us a recommendation that would be contraindicated, so we are able to establish that, as opposed to moving into a really speculative clinical space where the machine might be right, but might also be wrong with catastrophic consequences. David, before we go to your talk: sitting there as a clinician who is obviously taking care of patients and trying to run a quality program, and I know you're going to get to this a little bit, when companies come to you, or your administrators come to you, or technologies come to you, how are you figuring out what's legit, what feels safe, what you want to invest your time and energy in, versus what's just a fancy PowerPoint slide? 
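As an aside, the boundary-condition idea above can be made concrete in code. The sketch below is purely illustrative: the drug names, thresholds, and rules are hypothetical placeholders, not clinical guidance. It shows the shape of a rule-based guardrail that screens an AI medication suggestion against hard contraindications before it ever reaches the clinician.

```python
# Hypothetical guardrail sketch: every rule here is an illustrative placeholder,
# not a clinical reference. The point is the pattern: hard, human-authored
# boundaries sit between the model's suggestion and the clinician.

CONTRAINDICATION_RULES = {
    # drug -> predicate that returns True when the suggestion must be blocked
    "metoprolol": lambda p: p["heart_rate"] < 50 or p["sbp"] < 90,
    "spironolactone": lambda p: p["potassium"] > 5.0 or p["egfr"] < 30,
}

def screen_recommendation(drug: str, patient: dict) -> str:
    """Return 'blocked' if a hard boundary is violated, else pass it along."""
    rule = CONTRAINDICATION_RULES.get(drug)
    if rule is None:
        return "no rule on file: clinician review required"
    return "blocked" if rule(patient) else "pass to clinician"

patient = {"heart_rate": 44, "sbp": 112, "potassium": 4.2, "egfr": 55}
print(screen_recommendation("metoprolol", patient))      # blocked (HR < 50)
print(screen_recommendation("spironolactone", patient))  # pass to clinician
```

The model proposes, the clinician disposes, and the guardrail guarantees that suggestions outside pre-established boundaries never even make it to the sign-off screen.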
So at this stage in the game, and I'll get into the boots-on-the-ground examples in my presentation, I think, as Shobha mentioned, the high-impact, low-hanging fruit is probably the easiest way in to sell to an end user like myself, where it takes out a lot of the labor of love, or of hate, or whatever, that goes into a lot of the day-to-day work. In terms of the more advanced solutions that get into medical diagnoses and treatment, a lot of that is treated with much more healthy skepticism, and I think those are the areas where a lot more vetting, a lot more time, a lot more maturity in the technology needs to be seen. So for those kinds of solutions, it's more of a wait-and-see approach. Those that are higher impact with lower risk can be implemented a little more easily. Great, so a really practical approach. Nathan, do we have any questions that have come in from the audience that we should address now before turning over to David? Actually, yes, we have several. Some of these are about dependence on AI as it gets embedded into the care treatment plan: whether we know that information is correct, how reliable it is in a world where systems can potentially be hacked, and the implications of that. So there are multiple questions there. Do you see a dependence coming in the clinical world for AI? And then what is the implication, I think, for security around that? David, you want to take a crack at that first? Yes. I guess in terms of the dependence, a lot of AI, in theory, and I will get into this in my presentation as well, is invisible, and you are using it without necessarily knowing you're using it. 
So if that leads to some form of, quote-unquote, dependence, you would say that we're already dependent on a lot of these AI technologies, because what they deliver is added convenience, efficiency, and so on. In terms of not being able to do your same work without AI, I think that may be a question for the future generations who grew up with these tools from when they were born and never knew a time before them. That just makes us older; they embrace the technology and it just becomes part of their day-to-day life. Like, my kids will never know a rotary phone. I'm on the tail end of knowing what a rotary phone even is. Did we really lose anything by phasing out rotary phones? Are we dependent now on modern technology to make calls instead of switchboard operators? Sure, but that's just, in my view, the natural progression of technological advancement. The cybersecurity part I can't really comment on too much. I'm sure in the future it'll be AI fighting AI in cybersecurity. It's going to be a problem no matter what, and you could potentially just take a fatalistic view: I'm sure all of our health data and credit card data has already been leaked somewhere on the dark web. But with medical information and genetic information, it definitely becomes a much bigger security concern. And I would definitely want to know, for any vendor trying to sell me an AI solution, how is this data protected, and how am I protected as well? Yeah, all I know is that HIPAA is not the answer anymore. That's one thing I'm certainly sure of. You know, on the question of dependence, I think the black-box issue of AI is a concerning one, particularly when you're consuming huge amounts of data and you can't tell which variable, which cause, is driving which effect. 
That's really kind of upsetting for those of us who've worked with risk models and things like that for a long time. On the other hand, how many of you know how the coils in an MRI machine work? Nobody, right? Do we trust the images that come out of an MRI machine or out of a CT machine? Of course we do. So I think we have to be careful: yes, we don't really understand how AI works, but actually we don't understand how most technologies work. The internal combustion engine, how many of you could really fix one? I sure as heck couldn't, right? But I drive it all the time. We rely on it. We have ways of managing it. So that's how we're going to have to adapt as a society. All right, David, why don't you take us away and close us out, and then hopefully we'll have a minute or two at the end for any final questions. Oh, sure. So again, I'm David. I'm currently the co-director of our Cardiology Quality Program at UCLA Health. I'm also in my last year as chair of the ACC Health Care Innovation Section. We're sort of the hub for member engagement, networking, and career development, and we work with Shobha and Ami Bhatt and the ACC Innovation Program. So just a little plug: if you are at ACC in Atlanta in April, please check out the Future Hub and you'll see some of the work that we're doing over there. We've already gone over a lot of these learning objectives: the benefits, the challenges, how AI is improving or can potentially be used to improve patient care and well-being. And we've gone over a couple of real-world case studies, but I'm going to go a little bit deeper into the nitty-gritty as a practicing cardiologist. And we can start with this first audience question. Here's your post-test. You saw the pre-test. You can tell which generation David and I are from, right, in our selection of... Okay. David, you want to read the question? Oh, sorry. 
The increased use of AI for medical applications makes me feel: A, more excited than concerned; B, more concerned than excited; or C, equally excited and concerned. Okay, it seems like everybody is on that fence, so R2-D2 gone rogue, or friendly Dr. Darth Vader. Yeah. Can we have the next slide, please? I like this illustration a lot; it sort of depicts the symbiosis of man, or woman, and machine throughout the years. If you think about humans in general, we've been developing tools since the caveman era, and the whole purpose of these tools has been to help us accomplish more tasks, do more, and advance, essentially, to what we are today and where we're going. I think of AI in a very similar manner. It is exciting, and it is still a tool, albeit a very powerful tool that's getting exponentially more powerful by the day. And I just wanted to give some examples of how AI is already in our day-to-day lives, as I alluded to. You may not know it, but when your camera can do all these fancy things, like taking photos of my children sleeping without turning on the lights, this is AI in action, taking multiple layers of exposures and then merging them all together, done seamlessly so you don't really know it's doing that. I'm sure everybody here already knows about predictive emails, which help you draft your emails: it knows, I guess, what it's reading and suggests how you should respond, and it's up to you whether or not you want to use those suggestions. But these are also AI-enabled tools that people are already using. I don't trust the full self-driving mode of a Tesla, but I do trust the other mode, where it stays in lane and can navigate freeways for you. 
But this is also out there. Tesla's not the only one anymore; I think Mercedes in California was the first to achieve level three autonomous driving, so they're on the highways doing all this stuff, and these are all AI-enabled tools, and they're everywhere. As Shobha mentioned, from the shopping list that she created, to the photography that I mentioned, banking, social media with those algorithms, plus or minus whether they're good or bad for society. Even film, TV, and art: there's generative AI art. I had no idea, but I learned from a lot of my patients who are writers that AI is being used for script review, script revision, and generating ideas. In fact, that's partly what the big writers' strike was about, because AI was a huge point of contention, potentially taking their jobs, lowering their pay, shrinking the pool of talent, and limiting their upward mobility in that career, so that was very interesting for me to learn. And even software: that last slide, I learned, used a feature Microsoft now has called Design Ideas. I just made bullet points about what the slide was supposed to be about, and it said, hey, here are some ideas. This is literally a screenshot from PowerPoint suggesting how to make my slide more engaging. Again, this is all using AI to foster creativity and things like that. I think that's a lot of where these operational, high-impact, lower-risk areas are gaining traction in the community: a lot of this is really about improving creativity, improving efficiencies, and reducing resource- or labor-intensive tasks. How does that translate over to cardiovascular care, from my perspective as a practicing clinician and also in my role as an administrator implementing, maintaining, and expanding our ever-growing quality program? So we go back to the very, very basics. What are we doing in health care? Why are we doing this? We're focused on cardiovascular disease. I think these three guiding principles have been the same. 
We're just trying to detect disease earlier and detect it more accurately. With that information, we are trying to prevent worsening morbidity or mortality. And why? Because we want to increase both the quality and the quantity of life. And again, AI is still just a tool that is enabling us to transform our imagination, and it is doing so initially, I would say, by helping to reduce time-intensive tasks. As Shobha mentioned, it's accelerating disease discovery, whether from large unstructured data sets or from molecular discoveries from imaging; it can help improve prediction and minimize error. And one of the interesting things about AI is that it never needs to sleep, right? The model's always running, the model's always training. So this always-on philosophy of AI allows us to expand into, or maximize, the full 24 hours in a day. Speaking as a clinician, in the in-person encounters, my biggest trouble with the EHR and whatnot is that it keeps me from giving my patients my full attention: eye contact, talking to them, empathizing, building a relationship. But I've got five, ten minutes to do that, and then I've got to chart, and then I've got to go, and then just cycle through. And so the burden of documentation and data abstraction is a huge one. This is just an example of an exam room that demonstrates, just by looking, that the physician is facing the screen and the patient is sitting behind them, facing away. So even that simple encounter alone is setting you up to fail at that interaction unless you're able to recognize that and intervene. And so this is the old school: handwritten notes. We don't live in that era anymore, so at least we have text that is searchable. 
And I'm also, as I mentioned, old enough to know this world through my parents, who are both physicians, and how that led to the micro recorder, which led to somebody else listening and transcribing it down for you, with handwriting you could read, or on a typewriter. And this has evolved over the years without AI, and now they are leveraging some AI tools. You know, the Google Glass: there was a company called Augmedix back in the day. I don't know if they're still around, but the idea was, put on a camera, have someone else scribe for me so I can be present. Dictation tools, voice to text, smart phrases: all these things have really been trying to solve the same problem, which is giving time back to physicians and restoring that human element. And this is really evident in what they call pajama time, the after-hours work that a lot of clinicians are doing. Even before COVID, the average time a physician spent at home on after-hours work was about 90 minutes. As a parent of two young kids, I'm exhausted at nine; the last thing I want to do is open my EHR and type and catch up, only to do it again the next day. And post-COVID, once telehealth became broadly adopted and people signed up for electronic messaging, the volume of messages has just increased dramatically. This is a universal problem plaguing everybody in the health care system. And patients' expectations have also shifted in the pandemic: on-demand care, increased convenience. All this stuff is happening in my consumer life, so why can't it happen in medicine? 
And so this weird thing has started to creep in, and you'll see news articles about this billable-minutes mentality: you contact your physician, but then they've got to go into the chart, look at the meds, look at what we did last, make recommendations. That is still time being spent, and now the expectation is that even a short interaction may have a potential bill associated with it, which isn't great, and you don't want billing to deter people from trying to contact you. Anyway, this is a huge problem that people are trying to solve. And so time saved equals wellness gained, for the physician, for anybody doing a very resource-heavy task that just feels endless. These are just some examples of the newer AI tools that are being leveraged: conversational AI, ambient AI, those are the terms being thrown around for this. Obviously generative AI: Epic and OpenAI, or Microsoft, have a partnership now in which they've already started to incorporate some of the OpenAI tools into Epic. And one way a lot of this is being investigated in early studies is automatic note summarization. This would help a lot with the abstractors here. Imagine, from a cath lab, you have all the data you need; it's just in a report generated automatically, there's nothing to sort through, it goes into an automatic variable field, and voila, there you go. So it has huge implications for registries, imaging, structured imaging reporting templates, and generative AI responses to patient messages, where the model drafts a response that the physician will oversee and ultimately approve. I don't have the study, but I gather early results say that a generative AI message can even sound more empathetic than something coming from your physician. 
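To give a feel for the registry abstraction idea, here is a deliberately tiny sketch using plain pattern matching rather than any real NLP or LLM tooling. The note text and the field names are invented for illustration; they are not actual NCDR variables, and production tools are far more robust than a handful of regular expressions.

```python
import re

# Toy "data abstraction" sketch: pull structured, registry-style fields out of
# a free-text procedure note. All field names and the note itself are
# illustrative placeholders.

note = ("Procedure: left heart cath via right radial access. "
        "LAD with 80% mid-vessel stenosis, treated with one drug-eluting stent. "
        "Contrast volume 95 mL. No complications.")

def abstract_fields(text: str) -> dict:
    fields = {}
    m = re.search(r"(\d+)%\s+\S*\s*stenosis", text)
    if m:
        fields["max_stenosis_pct"] = int(m.group(1))
    m = re.search(r"contrast volume\s+(\d+)\s*ml", text, re.IGNORECASE)
    if m:
        fields["contrast_ml"] = int(m.group(1))
    fields["access_site"] = "radial" if re.search(r"radial", text, re.I) else "femoral/other"
    fields["drug_eluting_stent"] = bool(re.search(r"drug-eluting stent", text, re.I))
    return fields

print(abstract_fields(note))
```

The human-in-the-loop model described above keeps a place for the abstractor: the machine proposes field values like these, and the clinical team spot-checks them before they go to the registry.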
I don't know how many people here would be happy to know that, or would wonder, if I'm getting a response, is it really from my physician? Is he just clicking sign, or did he or she really take the time to evaluate this? But it comes down to time. When I respond to a message, I don't have time for a long preamble; it's, you know, here's your answer, dot dot dot, thank you, hope you're well. So the ambient clinical documentation is great. There's a company that I just tested out, and it just does the whole thing and creates your note. Everything goes where it's supposed to go; it was quite fascinating to see. But this is just starting to scratch the surface. And it can obviously improve revenue cycle management: you do your note, you do your visit, it scans everything, here's all the stuff you reviewed, establish level four, et cetera, click sign, easy button. So that's the time-savings, make-life-better standpoint, where I think AI is already having an impact. Next I'd like to shift gears to talk about population health analytics, where I think you're shifting from tasks and time to actually learning about disease states and building clinical programs and pathways, which is where you need to take a pause and have a longer time frame to vet this and iterate on it. As Shobha mentioned, care coordination, creating and maintaining a data infrastructure from unstructured data: having built our quality program from the ground up without any AI tools, this is by far the hardest thing to do. And tweaking it is equally hard. So I can only imagine how many people we are missing from our eligible denominator. 
There are technologies that can identify from radiology who's had a stroke, who's had calcium or atherosclerosis noted, and who would potentially be eligible for even little things like aspirin or a statin but isn't showing up in the infrastructure we built, or across the nation. There are ways in which AI can help build your cohort and build it accurately. And there's also identifying high-risk patients and helping to predict adverse events. I think this prediction modeling over large data sets is a huge field, and everyone's saying they can predict this and that. This is one of Epic's cognitive computing models, which unfortunately falls into this black box of how it works. It doesn't use LACE. It uses all that 200-million-patient data it has to tell you who's high risk, medium risk, and low risk. I don't really know how it works. It apparently was the best-performing model in one study in helping reduce heart failure readmissions at one hospital, and it's being incorporated and used in programs to get people on GDMT and reduce hospital readmissions. I've asked out of curiosity, and I know at UCLA they don't want to turn it on, because they don't know how it works. And you know, a lot of these large health systems are using this. So large language models, all-purpose prediction, it can do everything: having to cut through what it actually can do accurately or not is definitely very hard. But this is where a lot of the AI models are trying to gain a foothold, saying, wouldn't it be great if I could Minority Report this problem for you? Shifting gears quickly to operations, a lot of AI tools are already being tested or deployed to make the hospital smarter. Everyone knows throughput is a problem. If you could get cath lab cases done faster and rooms turned over faster, you could do more procedures and increase revenue for the hospital. We also know that there are staffing shortages across the nation for health care workers. 
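For contrast with the black-box model just described, the LACE index mentioned above is a fully transparent, points-based readmission risk score (Length of stay, Acuity of admission, Comorbidity, ED visits). The point values below follow the commonly cited scheme from the original derivation; treat them as a sketch and verify against your institution's version before any real use.

```python
# LACE index sketch: a transparent readmission-risk score. Point values follow
# the commonly cited published scheme; confirm locally before real use.

def lace_score(los_days: int, emergent: bool, charlson: int, ed_visits_6mo: int) -> int:
    # L: length of stay
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days          # 1, 2, or 3 points
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if emergent else 0                  # A: acute/emergent admission
    c = charlson if charlson <= 3 else 5      # C: Charlson comorbidity index
    e = min(ed_visits_6mo, 4)                 # E: ED visits in prior 6 months
    return l + a + c + e

# A 5-day emergent stay, Charlson 2, one prior ED visit:
score = lace_score(5, True, 2, 1)
print(score, "high risk" if score >= 10 else "not high risk")
```

Because every point is human-readable, a clinician can see exactly why a patient scored high, which is precisely what the proprietary model above cannot offer.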
So could an AI computer vision program help identify patients who are high risk for falling? Do you really need to pay multiple sitters to sit on the floor for these people who are watching Netflix or Disney Plus all night? Can you have a program that's sort of on a centralized monitor to help improve your efficiencies and scale? And this is the one I think that gets the most hype in cardiology, in cardiovascular disease, which is diagnostics. From the left to the right of your screen, you see your basic ECG, your chest x-ray, and then increasing complexity and cost of the studies that follow, echo, that's a stress test in the middle. And then on the far right, you see your coronary CTAs and your cardiac MRIs. And so what a lot of these diagnostic tests or how AI is being applied for these sort of follows a couple of different principles. One is we get so much information and confirmatory diagnoses from more expensive and invasive tests. So we better detect that up front with these cheaper tests that are ubiquitous. Just think about how many more chest x-rays are out in the world, ECGs, than there are echoes. Think about how many more echoes there are than cardiac MRIs in the wild. Think about how many people don't even get the advanced testing because they're subclinical. And by the time they present, the disease has already progressed to a point where it's clinically relevant. So you're really just trying to find subclinical disease and make it preventative, kind of like you do with primary prevention for ASCVD. A couple of examples, just going out in what's been published in the research and what's been approved through FDA and whatnot, based on your EKG alone, they can predict your age. They can predict your gender, your EF, your likelihood of developing AFib, even if you're in sinus rhythm, valvular heart disease, aortic stenosis, just from an EKG, and also even amyloidosis. 
And these are areas where we don't really know how the models are doing it, but they're doing it with very high accuracy. A couple of centers across the country have already deployed this so that every ECG runs through their model. Mayo Clinic is one of them; they've been among the first. And Columbia, or NewYork-Presbyterian, is another health care system in which every ECG is run through their advanced AI model. I know that Mayo Clinic uses EF screening technology on their ECGs to see who may have a low or normal EF. Moving to echocardiography, again, time savings: let's have something that can read an echo for us. It goes through and quantifies the chambers; I as a cardiologist look at it wholesale, see the conclusions, and for the most part say, hey, this is pretty accurate, and sign off on it. There are ways to do automated analysis of valvular disease severity, or to differentiate HOCM, hypertrophic cardiomyopathy, from hypertensive cardiomyopathy and the athlete's heart. An HFpEF detector. There's all this stuff people are trying, to get so much detailed information and granularity from a test that is ubiquitous in cardiology and advancing at a rapid rate. And then on the more expensive diagnostic imaging side, with coronary CTAs and cardiac MRIs, you're getting automated tissue characterization, and you're getting CT-FFR, which is basically a measure of whether a blockage is flow-limiting or not, something you would otherwise need a cath for. So all these AI technologies are being layered on top of the existing information, again to improve disease detection, but also to try to detect disease even earlier. And in terms of the disease discovery model, as everyone knows, HFrEF, or heart failure with reduced ejection fraction, from these registries, is one category that's pretty well defined by a low ejection fraction. 
HFpEF, or heart failure with preserved ejection fraction, is a heterogeneous disease, so AI has been used to try to phenotype different forms of HFpEF into four classes. AlphaFold is really cool. It was developed by Google DeepMind, and essentially it can predict, from an amino acid sequence, how a protein will fold and what it will look like, which is amazing because you can see how it will fold and interact with your known models. And obviously there's the malicious-intent part, which we've touched on briefly: cyberattacks, discrimination, worsening health disparities, bias in is bias out, reducing the workforce, and potentially, paradoxically, burning people out. Say, great, AI can write my whole note for me, but now I have to see double the patients. That hasn't really helped me; it's just made me more efficient, and I don't know if that will necessarily make me feel better, more satisfied, with more well-being. And I'll touch briefly, since we're running out of time, on regulation. There was a study a couple of years ago saying, wow, this thing can detect malignant skin cancers just as well as a dermatologist can, and here's an example of that. But when they looked back at what features it was examining, it was cheating. It was using the presence of a ruler on the image to determine malignancy, because malignant images tended to include rulers more often. So it said, aha, ruler, malignant. And that was its key feature. So again, the explainability of the black box is very difficult to tease out, and who knows how that will be addressed in the years ahead. The FDA is already on board, trying to figure out ways to put these technologies to use for drug development and to enable devices to follow certain standardized pathways to FDA approval. And this is really just a model of: what's the problem you're trying to solve for, and what data do you use? 
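The kind of probe that exposed the ruler shortcut is occlusion analysis: mask one region of the input at a time and watch how the model's score moves. The miniature example below is entirely synthetic, with a 1-D "image" and a hand-built classifier standing in for a trained network, but the mechanics are the same ones used on real 2-D dermatology images.

```python
# Toy occlusion analysis: the "model" and the 1-D "image" are synthetic stand-ins.
# The classifier secretly keys on a "ruler" region, and occlusion exposes that.

def toy_model(pixels):
    # A shortcut learner: it scores "malignant" mostly from the ruler
    # region (indices 8-11), barely looking at the lesion (indices 0-3).
    ruler = sum(pixels[8:12]) / 4
    lesion = sum(pixels[0:4]) / 4
    return 0.9 * ruler + 0.1 * lesion

image = [0.5, 0.6, 0.5, 0.6,   # lesion
         0.0, 0.0, 0.0, 0.0,   # background
         1.0, 1.0, 1.0, 1.0]   # ruler artifact

baseline = toy_model(image)
drops = {}
for start in range(0, len(image), 4):   # occlude one 4-pixel patch at a time
    occluded = image[:start] + [0.0] * 4 + image[start + 4:]
    drops[start] = baseline - toy_model(occluded)

most_important = max(drops, key=drops.get)
print(most_important)   # masking the ruler patch moves the score most
```

If the biggest score drop comes from masking the ruler rather than the lesion, the model is leaning on an artifact, exactly the failure mode the dermatology study uncovered.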
How does it work when you train it? How does it work in the real world? One of the biggest problems is that performance can drop off drastically when a model is deployed into the real world and is not necessarily retrained, or evaluated for potential unintended consequences, which are still to be determined. So anyway, in summary: AI is exciting, AI is concerning. I think the burden of proof and the question of value still need to be addressed; those are very critical issues. But ultimately, this is still just a tool, and one that we need to make sure is both true and accurate. I like this picture demonstrating that both views are true, but there's only one ground truth, and hopefully AI will help us maintain the integrity of that center image. Thanks, David. I think that covered the whole range of where we're going, from administrative to clinical, and let our audience know that this is no longer speculation. This is absolutely coming. I'll look to Nathan. I know we're probably pretty much done. Are there any other really pressing questions that we should get out before we wrap and allow these good people to do the very analog human thing of feeding ourselves with calories? Anything, Cindy? Hi, there. I think most of them have been addressed. There were a lot of questions about prediction, and I think you covered that very well. And then just how we were using AI for data abstraction, so I think that was also covered. Great. So why don't we end it there? We'll obviously stick around. I know we used up most of the time, and a lot of these questions are very specific, so the three of us will be up here if anyone wants to ask questions. Let me thank Shobha. A big round of applause for Shobha and David for being with us today. This is our future, and I'm excited about it, but we should have a little bit of caution too. I think it'll bring good things. Thank you so much. Enjoy the rest of the conference. It was great being with you today.
Video Summary
The two video summaries discuss the use of artificial intelligence (AI) in healthcare and its impact on patient care. Both videos emphasize the importance of understanding and integrating AI into the healthcare system. They highlight that AI can be used for administrative tasks, diagnostics, treatment planning, and patient engagement.

The first summary focuses on the American College of Cardiology's perspective on AI. It mentions the need for an AI strategy, appropriate governance, operational readiness, and education to successfully implement AI in healthcare. The risks of AI are also discussed, including explainable outcomes, training data, and ethical and privacy considerations. The video highlights the collaboration between the American College of Cardiology and technology companies to co-develop AI technologies and shares specific AI use cases in cardiology.

The second summary discusses the impact of AI on healthcare and emphasizes the need for regulatory approval and understanding patient demographics. It encourages healthcare professionals to embrace AI and highlights its potential benefits in documentation, data abstraction, diagnostics, and operational improvements. The video also addresses concerns related to cybersecurity, bias, discrimination, and workload.

Overall, both videos recognize the potential of AI in improving patient care but also caution about the need for appropriate implementation, education, and ethical considerations.
Keywords
artificial intelligence
healthcare
patient care
AI strategy
governance
education
cardiology
regulatory approval
diagnostics
cybersecurity