Showcase your Team’s Quality Achievements: Quality Summit Wrap-Up
Video Transcription
Well, thank you guys for sticking it out to the last day. My name is Hilary Miller and I'm the new Director of Data Science and Analytics at NCDR. And Joe Squire, who is the Director of Analytics and Data Science at the UPMC Heart and Vascular Institute, and I are going to talk to you all a little bit about practically using data to do a data analysis. So our objectives today are to introduce analytics and data science, spend a little time getting into the analytics mindset, and then Joe is going to take us through a real world example using our TVT and NCDR dashboards, and then talk about how best to communicate your findings to your leadership and your teams. So what is data science? So technically speaking, it is extracting useful insights from data using scientific methods, statistics, and computational algorithms. So what does that actually mean, right, in health care? Well, it can be used for a couple different things. Prediction, so categorizing a patient's stroke risk based on their imaging, detecting anomalies, so identifying heart rate abnormality in an ECG, and clustering, identifying groups to explain various outcomes to provide a more personalized treatment. But you might be asking, what does this have to do with me? Well, NCDR risk models and metrics are rooted in those principles of scientific method, algorithms, and analytics. And the reports that you guys get back in your dashboards and other reporting are actually outputs from those methods, available in tools such as the dashboard, that can be utilized to analyze data and gain insights for your organization. No data science experience needed. But in fact, you are part of this process because the data that you guys send into the registry, if it's not high quality, those risk models and metrics won't have the right level of validity to them. So you are very much part of this cycle. And I also want to say that utilizing dashboards or using the data that you have is actually a very vital part of leading to data science research. Because a lot of times, ideas for data science models and algorithms start by someone in your shoes asking questions about the data that they have. So we're going to spend a little bit of time today getting into the analytics mindset. And what we want to give you all is a process and some guiding questions to help get you started on your data analysis at your site. Now, the questions that you go through could lead to improvements at your site that could help improve patient outcomes within your site, or it could spark additional research that leads to a data science model down the line. Both of these things, all of these things, are incredibly important in quality improvement. But first, let's talk about tacos. All right. So I want to ask you all, what type of taco shell do you prefer? Can you see the number? Corn? I want to make sure you guys can see it. So number one for corn, two for flour, three for other, and four for I do not like tacos. Sorry, guys. All right. Let's try this again. And that's a tough question. Take a moment to give it some thought. All right. Oh, it's still moving. Okay. We're a little split. Got a good number who really like corn, a good number that like flour, some that like other. I'm curious what else you use to make your tacos. And then some that don't like tacos. So you may already know this, but tacos are a really big part of the San Antonio culture. There are more than 40 taco shops within 15 minutes of our conference location. 
So let's leave the conference, let's leave ourselves as conference attendees for a second and pretend for a moment that we are an operations manager working at a San Antonio resort. So you have a meeting with your boss, and your boss has been reading a lot of literature from the American Culinary Company, or ACC. And they report that 60% of San Antonio resort visitors eat tacos each week. Now your boss and you are very proud of your taco menu at your resort. So she wants you to know, how are we doing? So you guys together look at your dashboard, and you see that your resort is at about 40%. So she wants you to investigate it. Now, it may be tempting to dive right into your analysis or dive right out, depending on your level of comfort, but there actually is a process and guidelines that you can go through to help make sure that you have an effective analysis. So let's talk through. So the first thing to do in your process is to actually define the question to be answered. Without a clearly established question, you don't have that grounding force of which to take the rest of your analysis. And it might take time to define this question, and you might need to refine it over time. But sitting down and thinking about, what am I actually trying to solve? So here, we want to know, why are taco sales at 40% rather than the expected 60% for our resort visitors? Next, we want to think through some data that could be useful. And this can be done a couple different ways. You can consult with experts, both within your resort or organization or externally online. And you might identify things such as looking at food sales, looking at exit survey or visitor profile information and other things. And this is something that you'll get better at doing over time. And as you talk to more people about what data might be available within your organization, you'll understand more who to go to to get certain information. I also want you to spend some time just brainstorming, because, right, you're the operations manager. You're an expert of your own resort. So what do you know about the information that could be helpful? And also think, what is the most ideal data that, if I had any data in the world to answer this question, what would I go try to get? And the reality is, you might not have it, but you might have something similar that can help you answer. And also, sometimes it is good to start with the data in front of you, but be aware of what else is available to help support your analysis later on. And like every part of this process, it is iterative. So you may go back and visit this later on. Okay. So you've identified the data. Now what? All right. So we're going to walk through these common questions, guiding questions to use for data analysis. So I have them listed in numerical order here. But in reality, you're going to go in all sorts of orders when going through a data analysis to try to answer these questions. And you might ask one question five or six times to try to get to the root of what you're really trying to solve. But I'll walk through them really quickly. So where are we today? Is this a pattern? Which group or category? How are things connected? Is this weird? And what can we predict based on the past? So we'll walk through an example of each of these using our taco problem. Okay. So we're going to answer this one together. Make sure I do this right this time. Where are we today? Are we achieving the expected target? So thinking back to our dashboard, we're aiming for 60%. We had 40. 
Are we achieving the expected target for our taco sales? No. All right. Awesome. Yes. Making sure I don't have everyone's mind on lunch now. Yes. No. We are not hitting our expected target. So what do we do next? What do we look at next? We could ask this question again for other metrics, like overall food sales or other things to see if we're hitting other targets. But we decide we want to look specifically at the sales for each type of taco that our resort offers. Oh, sorry. First, we need to define, is this a pattern? So we want to know, are we improving, right? Our taco sales, we're at 40%, but are we going up, are we trending up, or are we trending down in the wrong direction? So I want to talk a little bit about what's called the two and three data point trap, right? Don't just rely on two data points to determine if there has been change over time. And this may lead to false conclusions. And in fact, in the panel yesterday, Dr. Bradley talked about, you know, you get people getting really excited, hey, we've gone up since last month. But if you're only looking at last month, you may not realize that you're just seeing variation around your mean. You need to look back further to actually see if there's a pattern. So let's look back a little further. Is this a pattern? Sure is. Our trends indicate that we've been kind of hovering around 35 and 40% for a while, but yet we see that long back, we were achieving the target. So what happened? What changed? All right, so now we want to look at which group or category. So this is where you take some of your subject matter knowledge and say, okay, where could there be differences within these taco sales that we could take a look at? So we're going to look here at our breakfast and our brisket taco sales. Do we identify similarities or differences? So we see breakfast taco sales and brisket taco sales are almost the same. But we know that we serve brisket tacos for two meals and breakfast tacos just for one meal. So do we actually expect to see the exact same performance for these two types of tacos? Let's look into that further. So now we ask ourselves, is this weird? Or are we observing something unexpected? Well, we look at our, we combine category with pattern and we see we are in fact observing something unexpected because historically our brisket taco sales have been much higher than our breakfast taco sales. Okay, so we're getting a little more information here going through these questions about what might be happening to taco sales. So now we want to know how are things connected. So here is where we talk to other people, either within our organization or within our team to understand if there might have been changes to an operation or a policy that could be driving some of this. So we talked to our menu team and they let us know, oh, yeah, a while ago we changed our supplier and we decided to just cut corn tortillas out of the brisket taco offering and we're only offering corn and flour with breakfast and just flour with brisket. We thought people wouldn't care. Well, we know from our poll earlier that people have preferences on their tacos. So this offering, this change in offering might be leading to some of this decrease. But then we keep talking, right? We keep talking to our menu team and we ask them, have you guys ever done anything like this before? Has this ever happened before? 
And they may tell you, you know, I'm new, this position gets turned over a lot, I don't really have a lot of historical information, but let me check in old memos and see if I can find anything else out for you. So what they find out is that, what can we predict based on the past? Well, what they find out is that a couple years ago they tried removing the vegetarian taco from the menu and they were wondering, you know, they don't really know what happened, they were wondering maybe there was a dip in sales then. So you go back to your dashboard, you go back to your data and you check, is that, is that happened, was that happening before? Like is this something we have seen before? And in fact it was, right? Taco sales went down when we took away the vegetarian option. And this isn't necessarily apples to apples, but it is helpful to see how changing a menu can impact what happens with your consumers. So now what? Right? We've gone through all of these questions, we've gained some good insights, what do we do with this? So this could result in changing the supplier, yet correlation is not always causation, right? So what it may require is more discussion, exploration and analysis. And I want to hone in that this is an iterative process, right? So a resolution may not be found on the first cycle. There might be several iterations of defining the problem, identifying data to use, asking questions and then communicating out those findings. And you'll probably learn a little bit more each time you go through it. And this process does get easier to do every time you do it. But I do want to recall that, or I want to say that the goal of analysis is to guide informed decision making. So I hope today helps a little bit with the circle on the left, data analysis, and that you guys remember that you are a subject matter expert, so this piece in the middle, and both of these together can lead to really great informed decision making. So I'm going to turn it over to Joe now to talk a little bit about how to apply these guiding questions in this process using NCDR dashboards and registry data, and then also talk about that last piece of the iterative process, which is communicating out your findings. Thanks, Hillary. Can we give Hillary a round of applause there? And so we're going to take what Hillary laid out here and we're going to apply it to a real-world example here using the NCDR dashboards that are available with your registry participation. And so my hope is that we accomplish two things with this, is one, we get a high-level view of how we can access and how we can navigate those dashboards, and two, how we can actually do a real-world practical data analysis at a high level. And again, like Hillary said, it's an iterative process, so we're not going to go very deep, and there's other things, other ways to approach this analysis at a deeper level that we won't go into, but we'll stay at a high level here and run through this. And so Hillary outlined this framework of basically getting questions together to guide how you're going to approach your data, and so we'll run through this pattern here with the NCDR dashboards. And so if you go out to the NCDR portal and you log in, you go to your registry, as we all have done. On the left-hand side there, you can see the dropdown for dashboards, and you can click into the facility's beta dashboard. 
And this will take you to the overview page, and on the overview page here, the main goal of this is to give you the high-level quality and process measures that you can benchmark yourself to the NCDR with and that you can then use to identify issues and where you're doing well and where you need to improve. And the nice thing about this is you can actually adjust that 50th percentile to be the 75th, 90th percentile, depending on what you're targeting. And you can also change the timeframe pretty drastically. Here you can see the timeframes at the top there. We're in Q2 2022, and it shows the last rolling four quarters for most of the metrics below. And if we go down to the metrics there where the top line there is observed mortality, you can see we get our percentage for the last rolling four quarters. There's a numerator, denominator underneath that. Then we have the NCDR 50th percentile, and then we have a quarter-over-quarter variance next to that, as well as some graphs on the right side there for volume and some demographics. And that will differ depending on your registry that you're in. And so going through Hillary's framework, we're going to take a look and kind of go through these metrics. And if we were to do that, we'd see that a lot of these are underperforming the national average, and so we're not meeting our target, right? But the thing is they're not completely weird until we get down to new permanent pacemaker rate and cardiac rehabilitation rate. And so these are drastically underperforming the 50th percentile and obviously weird. So we want to understand, is this a pattern or is this just an anomaly? And so what we can do is we can go up to the graph icon at the top here, click on that, and it'll bring up line charts for all the metrics. And like Hillary discussed, two data points don't make a trend. So we need to look at how the measure is performing over time. And so if we look in the bottom right here, we'll see our new permanent pacemaker and our cardiac rehab rates. And you can see with the new permanent pacemaker, over the last four quarters, it's been trending up. In the last six quarters for cardiac rehab, it's been trending down. And then the natural question is, like Hillary said, we want to understand, do we know if there's something that happened that triggered this? Was there a policy put in place that new permanent pacemaker placements cannot go to cardiac rehab? Weirder things have happened. But that's the kind of thing we want to think about at this point to understand and hone where we want to go with our analysis. Usually you're not going to know something right up front. And so we'll want to dig into a little deeper detail. So we'll go over to the metric summary here and click on the top bar there where it says metric summary. And this will take us over to this display here, where at the top we have our benchmarks. And all the metrics are outlined how they fall into the percentiles of the NCDR. And then below it are the details of those metrics, similar to what we saw on the first page, but a little more granular up front. And what we can do at this point is we can actually click on one of the metrics down there. It'll bring up the patient detail. And we can go to the patient detail page, and we can export that data and play around with it a little to see if there's any groups or categories that are really driving any of these variations that we're seeing in our metrics. 
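To make that "export the data and play around with it" step concrete, here is a minimal sketch in Python of the kind of group-and-compare exploration the speakers describe. The file name and column names (for example, a 0/1 new_pacemaker flag and groupings like valve_type, access_site, or operator) are hypothetical, not the registry's actual export fields, which will differ by registry.

import pandas as pd

# Hypothetical patient detail export from the dashboard; the file name and
# column names below are illustrative only, not the registry's actual fields.
detail = pd.read_csv("tvt_patient_detail_export.csv")

# Overall rate for the metric of interest (assumes a 0/1 flag per patient).
overall = detail["new_pacemaker"].mean()
print(f"Overall new permanent pacemaker rate: {overall:.1%}")

# Compare the same metric across a few candidate groupings to see whether
# any one group or category is driving the variation.
for group_col in ["valve_type", "access_site", "operator"]:
    rates = (
        detail.groupby(group_col)["new_pacemaker"]
        .agg(rate="mean", cases="count")
        .sort_values("rate", ascending=False)
    )
    print(f"\nNew permanent pacemaker rate by {group_col}:")
    print(rates)

The specific columns are invented here; the point is the pattern of computing one metric across several groupings and seeing whether a single group accounts for the shift, which then feeds the "how are things connected" conversation with the clinical team.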
And so once you do that, and once you, and you may not find anything, but you still have to deal with the issue of the metrics being off and that you're not achieving your target. So whether you find anything or not, you're going to have to present the data. And so the best way that I've found that you can approach presenting data is that you should get a framework in place, just like we have a framework for our questions where we analyze the data. And this helps you structure your presentation and be efficient in planning that presentation. And the framework I like to use is the goals, issues, actions framework. And what this is is we start out by telling people the goals of the data that we're showing them. We highlight the issues that are preventing us from achieving those goals. And then we recommend actions on how we can overcome those issues to achieve those goals. So what I'm going to do now is I'm going to show you an example of a bad data presentation. And then I'm going to show you an example of a good data presentation using the data that we just went through, just so you get an idea of how it works and how we can apply it to a real world example. So here's the bad data presentation. Here's our TAVR dashboard for the last four quarters ending in Q2 2022. Our observed mortality is at 1.6%. Significant cardiac events at 1.6. Acute kidney injury is at 0. Disabling bleeding is at 2.4. Device complications are at 1.2. Permanent pacemaker implantations at 9.8. KCCQ performed pre-procedure is at 100. 5-meter walk test is at 98.7. And cardiac rehab referrals at 66.7. If anybody has any questions, please let me know. Otherwise, I'm going to change and turn it over to Dr. Strange, and we'll talk about strategy. I've seen it happen all the time, right? Like, you're rushed. And it's not necessarily the individual's fault. Sometimes the meeting is pressured, and you've got to run through it. But again, going back to the whole discussion of culture and everything that we've had over the last couple of days, we really need to think about having that discussion and being able to present the data in a way that people can digest it, understand it, and take action on it. And so obviously, the bad characteristics. You basically just don't tell people any goal about the data. So it ignores the context of what's going on. You read from the screen. You don't guide the audience to any issues. You kind of just expect them to pick it up along the way. And you obviously can't discuss next steps, because you haven't discussed the issues. And so a good data presentation. So using the goals, issues, actions framework, you can structure it in the manner that I'm going to show you here. So this is our TAVR dashboard. And its goal is to give us a high level view of our program and the quality and process metrics that are important to the NCDR and have been vetted through the NCDR. It allows us to find any issues that we may have with our metrics, as well as benchmark ourselves nationally. And so we can address those issues. And so if we look at the top here, we can see that this is ending in Q2 2022. And the data encompasses the last four quarters. And this is data from the NCDR TVT registry. And you can see through here that we're largely underperforming the national average. So we have a lot of work to do. But I want to call out two primary issues that we have with this data. And this is our new permanent pacemaker rate and our cardiac rehabilitation rate. 
You can see with the new permanent pacemaker, we're about double the national average. And cardiac rehabilitation is 30% under the national average. And this isn't an anomaly. We have a pattern going on here. And you can see on the bottom right there that over the last four quarters, we're going up with our permanent pacemaker implantations on our TAVR patients. And we're going down with our cardiac rehabilitation rate over the last six quarters. In the last quarter here, we're about 10% above the national average with our new permanent pacemaker and 70% below the national average with our cardiac rehabilitation rate. And so what I would suggest we do now is that we pull additional data. We see if we can find any drivers that might be causing these issues. And we take this data and the insights that we may find. And we meet with our stakeholders, our clinicians, our administrators, our cath lab managers. And we figure out if they understand anything from the front line perspective that we can intervene on and reverse these metrics in the way that they're going by identifying these root causes and then forming work groups to improve those metrics. And so what we did here is we stated the goal of the data being shown. We contextualized the data. We guided the audience to the issues. And we discussed next steps where we would go into what I call diagnostic analytics, which is discovering the why behind the descriptive metrics that we have, where it says we're going this direction. Now we want to know why. So that's diagnosing with analytics. And so if you can get to that point where you're able to guide audiences to the issues and you ask them, you basically recommend to them what the action should be, that's what we want to do with our data. We want to get action from our data. And so to wrap this up, the basic concept behind what we're doing is driving to that action. And so to get to that action, we really need to be efficient with how we look at our data. We really need to have frameworks that we can fall back on to make us efficient and effective. And so getting a framework in place of questions to guide your initial data exploration, similar to what Hilary laid out, or the five whys, or something that's specific to your analytical needs that you can fall back on and use to iterate and use to guide you in your exploration. Using existing tools. So the NCDR dashboards are a great tool to start out your exploration with. I'm sure a lot of people have internal tools or vendor tools as well that you can use for exploration. So defer to existing tools up front. And then if you have to get into exporting the data or you have to get into spinning it up in something that's a little more complex, go down that route. But always defer to existing tools to be efficient. And then plan your presentation. So if you have a framework on how you plan your presentation, it makes you much more effective in how you're going to structure everything and how you're going to get people to take that action. And I personally like the goals, issues, actions framework. And so once you get people to take that action and you have that QI activity going into place and you see results from that, good or bad, you need to share it. And so here at the Quality Summit, we've seen over 100 posters of people sharing their work. And the reason we share our work is so that we can design, lead, and achieve quality together rather than in isolation. 
So that other people can see what we've done before, what has gone on before them, and so that they can learn from that and build on that and grow. And you don't have to wait for just the Quality Summit to publish your work. There's journals out there. And Minnow is going to talk after we do a Q&A here on JACC Case Reports, which specifically looks for novel and educational QI activities to be published in that journal. And so before we go to Minnow, we'll jump into a Q&A here. And then we'll hand it over to Minnow. She can discuss the JACC Case Reports and how that process goes on. All righty. Thank you, Joe. Thank you. Nice. OK, so we're going to spend about 10 minutes on questions and then transition over to Dr. Minnow Walsh. So going through a couple of these, so does NCDR offer more information sessions or education on data analytics? So that's a great question. So for the dashboards themselves, we do have some training on the QII Learning Center on how to understand what is actually on the dashboard. But as far as I know, there is no specific education on data analytics. So that is some great feedback that we will take back with us. When do you know? I'll give this one to Joe. When do you know when to react? If you wait six months for multiple data points, is it too late then to address it? That's a loaded question. I can answer it. I mean, I personally, and it gets a little out of scope here, but I personally like control charts. And I think those are really, they're a low entry level for individuals to kind of stand up. There's a lot of programs out there that can automate it. But control charts are a great way of guiding you and helping you understand whether to react or not based on certain principles. So I would encourage anybody, if you're interested in understanding when to react and when not to react, look up control charts. A great book is, I'm going to blank on the author's name. I'm blanking on the title now, too. But an individual who's done a lot of work around it is Donald Wheeler. So look up his books. And oh, Measures of Success, it's not by Donald Wheeler, but Measures of Success is a fantastic book that is very easy to read, very easy to understand. And it talks about reacting less and being more productive. Awesome. Thanks, Joe. And we actually had a follow up question about tools to help automate control charts. Do you guys use any? Yeah. So there's the one that I use that's pretty easy to use is QI Macros. And it's just an add-on to Excel, which makes it super simple. Export data, and you can easily tee it up in there. It does cost money. I think it's like $400 for a lifetime license. But I believe it's lifetime. But that's my go-to other than more advanced tools. Awesome. Thanks. And I see a trend here. So I do see a question. Would the NCDR ever consider using control charts instead of trend graphs to help better determine if an institution is in control or statistically significant? So that is, again, great feedback that we will take back with us. And that, I think, actually might be all of our questions to answer here. So with that, I will turn it over to Dr. Minnow Walsh. Thank you all. Thank you. Thank you. So how many of you think they should have entitled their presentation from tacos to TAVR? Come on. Thanks for having me today. I want to call out one thing about this meeting. I am a big fan of the NCDR. I am a past president of the ACC. I've been to a lot of meetings. This is the only meeting with a majority of women who participate and do the work. 
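As a follow-up to the control chart discussion in the Q&A above, here is a minimal sketch of an individuals (XmR) chart calculation of the kind Joe describes, using the standard center line plus or minus 2.66 times the average moving range. The quarterly rates below are made-up numbers for illustration only; a tool like QI Macros would compute the same limits from an Excel export.

# Minimal XmR (individuals) control chart calculation for a quarterly metric.
# The rates below are invented for illustration only.
rates = [62.0, 58.5, 61.0, 55.0, 48.0, 44.5, 41.0, 38.0]

center = sum(rates) / len(rates)

# Average moving range between consecutive quarters.
moving_ranges = [abs(b - a) for a, b in zip(rates, rates[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: center line +/- 2.66 * average moving range.
upper = center + 2.66 * avg_mr
lower = center - 2.66 * avg_mr

print(f"Center line: {center:.1f}, limits: {lower:.1f} to {upper:.1f}")

# React to points outside the limits (special cause) rather than to every
# quarter-over-quarter change (common cause variation).
for quarter, rate in enumerate(rates, start=1):
    flag = "investigate" if not (lower <= rate <= upper) else "common cause"
    print(f"Q{quarter}: {rate:5.1f} -> {flag}")

This is the "reacting less" idea the speakers mention: the limits, not a single quarter-over-quarter comparison, decide whether a change is worth chasing.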
So let's give a round of applause. All due respect to the men, the quality summit is always like this. So as Joe said, I'm here to talk about JACC Case Reports and publishing your work. I am a deputy editor of JACC Case Reports. I was past interim editor-in-chief. And it's a journal that if you haven't explored the various options for submission, I really want you to go on the website on your way home and see what we do there. So this is amazing work that you're all doing. We see this every year when we come together here. And we see the abstracts and all the work that you're showcasing. But I think you're sometimes hiding your light under a bushel. And I'm not going to give you four ways. But one of the things we see in science, clinical science, data science, basic sciences, if it's not written down, it didn't happen. Your program can be doing a fantastic job at whatever it is, whatever data you're looking at. And you've heard today how to look at your data. But if you don't report it in public, who knows that it happened? And how can they replicate your data? This is the landing page of the JACC Case Reports journal. And you can see that the banner across the top will show the various viewpoints or cases that are presented. And this is the current issue. So it's a little bit of a misnomer to think that this journal of the whole suite of JACC Journals is just about case reports. It is about case reports. And you are more than welcome to take a patient case with your team, with your docs, your NPs, whoever is involved in the patient care, and encourage them to report a single individual case. But there's a lot of other submission categories, and I want to show you what some of those are. So this is our editor-in-chief, Gilbert Tang. He took over July 1st. And the other deputy editor along with me is Dede Wang from Wayne State. I'm not going to show you the entire editorial board, but it's really diverse. It's international. There are people of all specialties. So when you submit something to our journal, unlike one of the specialty journals like JACC Heart Failure or JACC Interventions, we have a wide array of clinicians, including CVT members, who are really familiar with the data that you're submitting and the cases you're submitting. And we make sure that we get it out to the right person. So these are our instructions to the authors. And so I know you can't read the fine print, but this is the landing page. It's very easy to navigate from the journal to instructions for authors. And what I want to show you is this is you right here. Quality Improvement Projects. You know, I don't think we've advertised this enough because we get very few submissions, which is why I think I was invited to come talk to you because we really want to beef this part of the journal up. We want to hear about what you're doing. And if you go into the journal, you'll see the instructions for what we want here. This is Quality Improvement Projects. And I'm not gonna go through all this because it's for you to read, but we really want, you know, a word count maximum, 1500. We get a little generous if you go over a little bit. But we want the background, the rationale of your project. You're all really familiar with that. And you also just heard how to do a great data presentation. What's the project summary and what are the take home messages? The take home messages really should be, this is how we did this. This was the problem we addressed. And this is the data. 
And this is how we convinced our clinicians to do things differently. However we improved our process, we looked at same day discharge. There's so many different things we want to hear about that you can just go here and look at what we're interested in. So again, I don't want to go through the whole details. I just am showing you these are the landing pages that you're gonna see. Really important that you have take home messages. Extremely important when you're submitting to any journal is to do a literature search before you submit. Make sure you give us the right references. If somebody's done something similar, put it in your references. Doesn't mean that we're not gonna, you know, accept yours, because it may be slightly different, novel, or we might think that it's a really great way to go about things. So I want to give you just a little bit of behind the scenes on how an editorial board works, because I think people get so angry at the feedback that they get from editorial boards. But I think it's really important if you understand that we're deliberating and we're really on your side and we want to accept your paper. If you want to read this whole thing, it was published last month. So your manuscript doesn't have to be a first in human procedure or technique. People are always thinking this is the only thing that we're interested in. First in human is cool. We'd like to see that. I don't know if any of you, I'm a transplant cardiologist. The first robotic heart transplant was done in the UK last week. I messaged the chief surgeon on LinkedIn, it's the only place I can find the person, and said we want to publish that. I haven't heard back yet. So first in human is really cool for our journal, but it's certainly not the most common presentation. Very rare disease, we like to publish that too. It doesn't have to be that either. But it needs to be well written and educational. The number one thing that we send a manuscript back for is that it's poorly written and the English grammar's incorrect. So make sure that you and others on your team read the paper carefully and make sure that it's well written. And number one thing of our journal is we want manuscripts that people want to read. We want readership, and we want people to learn from what we're publishing. So that's why your data and what your teams are doing is ideal to submit to our journal. JACC Case Reports is very visual, so we don't just want narrative. We did this, we did this, here's our problem, we addressed this, this is how many patients. For the cases, we want images and videos, meaning echoes, MR, CT, coronary angiography, pictures of surgery, et cetera. If it's a pathology specimen, we really want to see that, and we want to hear what the pathologist thought about it. One of the most frequent reasons for us to reject something is that something is presumed but not proven. One of the things Joe said was whether something is causative or not. He was talking about a different thing, but there's often an assumption that we think this caused this in this patient, and we will not publish cases where it's some kind of speculative situation. A really common one: I can't tell you how many COVID vaccine cases we did not publish. There was no evidence for causation, and there's often no evidence of causation of other things too, so be really careful that you don't have a hypothesis that you haven't proven. If there was no indicated biopsy or testing or genetic testing that nails down the diagnosis, we'd be less interested in publishing your findings. 
It's a visual journal, so for quality projects, we'd really like to see tabular data, some sort of visualization of your data that's not just paragraph after paragraph. A table's nice, a chart's nice, use AI. You have to write down now in journals that you have used AI in your presentation, but we really want, it's like having a central figure. We want the readers to be able to see what you're talking about. So this is kind of the process of how we evaluate submissions. We get the submission in, you see the red arrow there, reject, that's called a desk reject. Why would we do a desk reject? Like I told you, poorly written, we can't understand it. That doesn't mean that we're not gonna accept possibly a revised manuscript. We give the authors feedback and we say, we really can't, there are so many errors in the writing of this manuscript that we're unable to send it out to review. Please send this back when you've given attention to that. The editor-in-chief, or one of us deputy editors, sends it out to the editorial board, to one of the associate editors who is familiar with that area. And then the associate editor, again, somebody who's an expert, might say, yeah, this is low priority, or it's not correct, or the care wasn't correct for this patient, and that might get rejected at that level. But if it goes out to review, that's always a great sign. That means we think that we're interested enough to send it to external reviewers, and you can get an accept decision, you can get a revise decision, and even at the very end, you can get a reject decision. We do accept people rebutting and saying I want you to reconsider. We've just reconsidered one this morning because the authors have pointed out to us that, you know, reviewer X said this, and we actually did revise accordingly, and so we don't understand the rejection. So we're going back around the bush on that one. So this is the process. We're on your side. We want you to publish. We want to publish in our journal. This is the most recent call for submissions, and this is not for quality improvement so much, but we do various special issues. We did one on COVID. We did one on cardio obstetrics. Our most recent one is going to come out on rheumatic heart disease, and then we have worldwide submissions for that. We have one coming out on cardiac masses, and this call is open right now. This is complications. We want to hear what went wrong. We don't want to publish malpractice. We know what malpractice is. We understand because we have a bunch of experts who say, oh, dear. Oh, dear. This was the wrong thing to do. We're not going to publish cases like that, but others suggest, you know, we missed the diagnosis the first time around. We did different imaging, and look, we identified the problem. Those are the type of submissions that we're calling for right now. So I already said this. We're on your side. We want to publish, and we really want this meeting to be partnered with the journal so that we're able to publish your submissions. So especially if you submitted your data to this meeting and had it in abstract form, you're ready to go. Go ahead, start writing it up, and I will take questions along with questions for anyone else, and please contact me personally if you want to have, if you have questions about a particular submission. Thanks. Thank you, Minnow. That was great, and thanks to the panel because our goal here today was helping all of you showcase the quality work that you do. So there are a couple of questions. 
One is, how do they access JACC Case Reports? I'm sad, sad, sad that you don't. You're all members of the college. You have, if you're a member of the college, you have access to all the JACC journals. So you just go to JACC journals, Google, click, you'll be in. And again, if you're not a member of the college, then it is an open access journal. So even if you're not, you can get on. So just type in JACC Case Reports. And is the submission process laid out there on the website? Yes. What you want to do, if you go to JACC Case Reports, it'll say information for authors, and you click on that. And it takes you to the landing page that I showed you. Great. And go ahead and read a few cases and a few viewpoints so that you kind of get the feel of what the journal's about. So there's another question. Is there a list of projects, QI projects, that have already been done and published? Would be good to know so we don't work on the same thing that someone else has already done. Great question. And I'll tell you, there have been so few published in our journal that I don't think that really has any bearing. And we're always willing to publish additional information on another topic or another institution because people do things differently. And we're all about education. So I wouldn't worry about that. But they can find that on the? Well, you'd have to look at prior issues. Query the journal, and then you'd have to put in quality submission. And you might be able to find it that way. And once it's submitted, there's notifications that are sent to the author? Yes. You'll get notified. And this is true if you submit to any of our journals. You'll get notification that your paper was received. And then the waiting period comes. You have to give at least two weeks for the editorial board to look and send out a review and things like that. So don't expect an answer within two to three weeks, unless, sadly, it was a desk reject. You'll hear about that pretty fast. Here's a question that seemed to have been a barrier for someone. What do I do if two reviewers want my paper written differently? It has stopped me previously from publishing. That's a good question. That's for the editor to decide. So sometimes we have somebody who says, a reviewer says reject, and another says accept with no revision. Then you as the associate or deputy or editor in chief say, which one do we think has more weight? Sometimes reviewers can be mean. And they're sort of inappropriate. And we say, well, no. Other times, a reviewer is a real expert in the field and says, I really don't think this was done well or was done correctly. We tend to listen to senior. When I say senior, I mean really experts in that area. But it's for the editorial board to resolve those. And they're not uncommon, where one person thinks one thing and one person thinks another. We do not let politics of science enter in and let people who are in competing groups kind of look at each other's work. So there's a question. Do QI projects have to follow the SQUIRE guidelines for JACC Case Reports? It says SQUIRE. Well, it depends. One of the things you can do is just, in advance, query us. Hey, this is what I want to write. Is that something that you'd be interested in seeing? We're happy to answer that on email before you start writing. I'd have to know specifically what you're talking about. So we had over 100 posters submitted here. Any advice on how they kind of turn that into a submission? Right. Start writing. So it's already up there. 
You already got the poster. Most of it's posters, I think, right? I only got here last night. But so it's all your data. You just have to formulate it in a different fashion. And again, if you want to email me personally, go ahead. And I'll take a quick look. Great. Let's see. We have another. This was it. Would JACC Case Reports ever create an annual award for best QI paper and support that presentation at Quality Summit? Nice. Good idea. Well, we'll take that back. Well, one thing we're doing. No, I like that suggestion. Appreciate any input like that. But one thing we've been doing this year that's new is we've been partnering with different meetings around the world. And we've partnered with the Latin American ACC meeting, as well as ACC Asia, to encourage those folks to submit their work to the journal. And I can see the wheels turning another year and maybe having a special issue on quality improvement associated with this meeting, but I'm not going to promise because we're kind of experimenting with this model. But this is a huge meeting, and I think we would get a lot of publications. Yeah. Great. All right. Since we have a few minutes, I'm going to go back. There's actually a question here for Joe, I think. It's like, so switching back to sort of data analytics and presentations, what are some of the biggest lessons learned from data analytic presentations? Like, what's worked well? What has absolutely failed? Namely, to get buy-in to help drive performance improvement. Any epic failures? There's always epic failures. I think probably some of the, and we touched on this in one of the sessions, was the concept of having people believe in the data that you're showing them. I think that's probably where you're going to experience your biggest failures, is if you come barging into a room and you have especially negative data to show people, they're instantly going to balk at it if they haven't been accustomed to it, introduced to it, basically made literate of that data and understanding what's behind it, and how it's collected, and what it represents. And I think that's probably where you're going to see your biggest failures. I think for successes, everybody likes positive data, which you can't always tee up. But that's always going to be like your biggest success. But I think where you're going to have the biggest impact is when you can actually help guide people to understand where we can start looking for the resolution to issues that are there. And if you can offer them that support, rather than coming in saying this is bad, and not actually guiding them to how we can solve this, most people won't accept that. Because it's like your kid coming to you with a problem, or your employee coming to you with a problem and not offering any type of solution. Well, it's then on somebody else. You pass the buck, and it's on somebody else to solve the issue. So I think guiding people to the issue and helping them solve that problem is really where you're going to be most successful. Yeah, I'll touch on that, too. One big thing that I've learned from failing at it is knowing your audience. So if you're going into a meeting, you want to think, and you might not know the first time, but you'll come to learn who is more comfortable with data and who is a little less comfortable, and how do they want to receive data information, right? Is it someone who has five minutes, and they want bullet points, like we are here, we want to be here, this is what happened? 
Or do they actually want to get into the details? They want to know, OK, show me these graphs, show me these visuals, let's talk through what it is. So I think knowing your audience and tailoring your communication of data to that audience is a really big factor in successful data sharing. Yeah, and I would also add that, kind of make it real, just as Hillary showed with the taco example, kind of maybe changing it up a little bit to help people see a different perspective actually can kind of help explain things a little bit. But I think knowing your audience, kind of how they're going to react, and visuals really do help. I'll add something just from the quality standpoint, as I'm a quality leader in my system, and in regard to heart failure, the defensive crouch of a lot of clinicians who are not providing guideline-directed medical therapy at the time of hospital discharge is a big issue. I'm sure you have that in your institutions as well. And there's a constant discussion of what, they're only in the hospital for two days, how could we possibly provide them with, and I say things like, of course, granted, I know I'm not the leader, but I say, life-saving medical therapy? Would you like your family member to leave the hospital without that? I mean, but it's interesting, because depending on who the audience is, as you say, like if people trust the data, we've been collecting this data monthly for decades, you can be assured that this is your data, and that we're not doing very well in this last quarter. But I think it comes back to making sure they do believe the data, and also accepting what the problems are: oh, if a short length of stay is part of the issue, we're going to help you, because we're going to engage, in our institution, we're engaging the pharmacy team to help you with that. So that is kind of it for our questions. So I just want to say thank you to our panel, and I hope all of you got some ideas that you can take back about how you can present your data, and then also showcase your quality. Think about submitting to JACC Case Reports. What a great way to showcase what you do to the physicians and the care teams that you work with, to have it published in a JACC journal, I think is just a really great way to showcase. So hopefully you all walk away with some ideas. So that being said, we're really kind of at the end here. And I just want to say a huge thank you to everyone who's been here, who's contributed to our faculty, our planning groups. There's a few special shout outs. I don't know if Maria's in the room, but Maria Ortez and Maddy Tappe, those are those calm ducks with the feet just paddling underneath that have gotten us here to really help make this program a success. And we've had a really great couple of days. We've been energized as staff. I hope all of you have felt energized. Hopefully you have one or two things that you can take back to your organizations and your institutions to say, I think we can do this. I think we can improve what we're doing, look for those opportunities, and to move forward. There was, I don't know if it's up on the screen, but next year, Denver, a new city for us. So hopefully many of you will be able to make your way there. We always want ideas. And if you have ideas for sessions, topics, please put that in the evaluation. We do read everything. No matter what you put in there, we read it, we look at it. It helps us plan for the next year. 
If you have ideas about speakers, if you want to volunteer to be part of the planning of this program, put that in there. Contact us through the support center. We are always looking for folks who want to help us really make this a success. So again, thank you all so much for being here. We've had a great time. San Antonio's been a great city. The hotel has been great. And I hope all of you walk away feeling that you are recognized and appreciated, and what you do does matter and really helps transform cardiovascular care. So thank you so much for being here.
Video Summary
Hilary Miller and Joe Squire conducted a presentation on utilizing data analytics effectively in healthcare, focusing on extracting insights and improving patient outcomes without needing prior data science experience. They emphasized the importance of high-quality data for risk models and how tools like dashboards can facilitate data analysis. The session incorporated an engaging taco-themed example to demonstrate the iterative process of data analysis, including defining problems, identifying relevant data, and analyzing patterns. They highlighted guiding questions essential for conducting analyses, such as assessing current performance, identifying patterns, and exploring potential connections between data elements. Joe emphasized the importance of structured data presentations using frameworks like goals, issues, and actions to facilitate informed decision-making. The presentation concluded with a discussion on using NCDR dashboards for effective data analysis and motivating action within organizations. Additionally, Minnow Walsh encouraged the audience to submit quality improvement projects to JACC Case Reports, outlining the submission process and emphasizing the importance of publishing work to foster collaborative advancements in healthcare. The panel answered several questions related to data analysis, visualization, and the submission process for research papers.
Keywords
data analytics
healthcare
patient outcomes
risk models
dashboards
data analysis
structured presentations
NCDR dashboards
quality improvement