Advancing Transparency and Accountability Through Public Reporting
Video Transcription
Please welcome Senior Medical Officer of External Affairs, ACC National Cardiovascular Data Registry, Dr. Ralph Brindis.

Good morning, everyone. Again, my name is Ralph Brindis, and we're here for an opening session on day two to discuss advancing transparency and accountability through public reporting. I want to first have a small poll. How many people in this room have their hospital participating in an NCDR public reporting endeavor in one of their registries? Raise your hand. I don't know if we can see with the lights how many people that is. It's quite high. The next question: how many people in the room have a hospital that is not yet participating but is honestly, strongly thinking of participating? Raise your hand. Okay. And the last question: how many people in the room have a hospital that is saying hell no, we won't go, and we will not ever participate in voluntary public reporting? Raise your hand. Okay. You were too scared to do so.

With that in mind, let's start our session. I want to give a brief introduction, but I really urge you to look at the Quality Summit materials for their more complete bios. First, my co-chair and moderator is Ellie Huff. She's the vice president of cardiovascular service line operations at the Baylor Scott & White Health heart hospitals, and also a member of our NCDR public reporting work group. Next to her is Dr. David Shahian, professor of surgery, Harvard Medical School, and Mass General. He's the chair of the STS Quality Measurement Task Force and the past chair of the STS National Database Workforce. He's on the board of directors and executive committee of the NQF, and he is also a co-chair of the TVT public reporting subcommittee. But no one in this room knows that he's also a relative of Donabedian. How's that? Next to him is Dr. Greg Dehmer, who's an interventional cardiologist by training, a national leader, and a past SCAI president. He was until recently the medical director of quality and outcomes at the Carilion Clinic. He's the founding chair and now co-chair of the ACC NCDR public reporting advisory group. Next to him is Ben Harder, the managing editor and chief of health analysis at U.S. News & World Report. Ben has been a terrific ACC collaborator. He serves on the TVT strategic advisory group and also on our public reporting work group. My involvement in public reporting is that I co-chair, with Dave, the TVT public reporting subcommittee, and I also chair the clinical advisory panel for cardiovascular public reporting for the state of California. Our format today is that we're going to have Dave give a 15-minute talk on accurate measurement and stakeholder trust in public reporting. That will be followed by Greg giving us a talk on the good, bad, and ugly of public reporting, and then by a panel discussion and Q&A with our audience, moderated by Ellie. With that in mind, Dave, tell us about accurate measurement.

Thank you so much, Ralph. I have no financial disclosures. What I'd like to share with you this morning are some thoughts about several generic, pervasive issues in healthcare quality: how we measure that quality, and how we use those measures to improve quality. I'm going to drill down on how those issues impact us in the cardiovascular world, and then I'm going to finish with some thoughts about how we move forward from here. So first of all, what's the problem?
I think all of you realize, though much of the public doesn't, that we have substantial and sometimes unacceptable variation in healthcare quality, both at the hospital and the individual practitioner level. We've got to improve quality, we've got to reduce that variation, and we've got to deal with the few but significant low outliers. How do we do that? Two requirements. We've got to measure quality, and we have to engage stakeholders. The problem is that both those requirements have their issues. Healthcare quality measurement is often problematic. It's methodologically very challenging, and oftentimes those methodologic issues are not appropriately dealt with. Outcomes are the most technically challenging, but also the most important, of all the various measures that we produce. There are no standards for healthcare measurement, except for those of the so-called consensus-based entity, which used to be NQF and is now Battelle. There's no requirement that a measure developer publish their methodologies, that they be peer-reviewed, or that they be externally validated. We have too many measures, the results are often conflicting and often inaccurate, and as a result, our end users receive conflicting, contradictory results. On the stakeholder side, providers often distrust these measures, and because of that, they don't want to participate in public reporting. Providers also often don't see any particular advantage to public reporting, or a responsibility or necessity to do it. And on the consumer side, when they see all these conflicting results, they tend to ignore them. So we've got some problems to deal with. Let me drill down and unpack several of those.

First of all, just to show you a little bit about this substantial variation that we have in healthcare quality: these are data from probably the most extensively studied procedure in all of healthcare, coronary artery bypass surgery, and these are fairly recent data from just a couple of years ago. On the bottom right, you can see, particularly for adjusted morbidity, that there's a substantial spread in morbidity across providers. But I think what's most telling is the box in the upper left, which is a breakdown by star rating status in STS. You can see, for example, that adjusted morbidity ranges from 8% in the three-star programs to 16.7% in the one-star programs, so more than a doubling as you go from a three-star program to a one-star program. Similarly, for adjusted mortality, you go from 1.4% in the three-star programs to 4.1% in the one-star programs. So that's healthcare variation at the hospital level for CABG. This is similarly shown in this paper, where we looked at individual surgeon performance: the same sort of variation. No different in percutaneous coronary interventions. These are data from New York State, the most recent data published, 2017 to 2019, and you can see on the right-hand graph very substantial variation in the results for PCI. Same story for TAVR. Again, this is from New York State: for programs with adequate sample size to classify outliers, the range in risk-adjusted mortality is anywhere from 0.5% up to nearly 5%. That's almost a tenfold variation. So I said earlier that optimal improvement requires both quality measurement and stakeholder engagement. A lot of credit goes to Dr. Don Berwick. Many of you know that name.
He founded IHI, and he is one of the great quality leaders in American healthcare. He, I think, tried to explain how measurement, reporting, and improvement are all needed together. On one side of this flowchart, you can see what I would call the intrinsic pathway for quality improvement: a provider sees their data, and without any external forces operating, they simply want to get better. This is something that I hope every program wants to do, whether there's public reporting or not. But on the other side, you see what I would call the extrinsic pathway, what Don Berwick calls Pathway 1, in which various stakeholders (payers, patients, other consumers) see outcomes data and choose providers based on it. And then there's the crosswalk between those two pathways, where providers, realizing that patients and others are looking at these data, redouble their efforts to improve their quality. So healthcare quality measurement is intrinsic to both those pathways, but it's complex and often problematic. Outcomes, what Donabedian called the ultimate validators of healthcare quality, are also the most methodologically challenging. Here's just a partial list of all the things a developer should be thinking about when they develop a measure: the data source, the burden of collecting data, the data quality, the type of measure, the variation and gap, sample size, observation periods, risk adjustment, validity and reliability, outlier determination, how you present the data, minimizing the time lag, and unintended consequences. And again, this is just a partial list. I'll tell you that most developers are not looking at all these issues. I've been talking about this, particularly with respect to the cardiovascular world, for years, and arguing for the need, in this environment of complex methodologies, to have standards so that everybody is playing by the same rules. But the reality is that right now what we have in healthcare is what many have called measurement chaos. We have too many measures, no mandatory standards, variable methodologies, and inconsistent findings. Just to show you a few examples, here are some of the state and national reports that many of you are familiar with; these are all in cardiovascular. Some of the commercial, organizational, web-based, and public service report cards: RateMDs, ProPublica, Leapfrog, Healthgrades, Vizient, U.S. News & World Report. And then finally, some of the professional society quality programs: STS, ACC, and the American College of Surgeons National Surgical Quality Improvement Program. We're not immune from this problem, even in the cardiovascular world, where we have many good measures. Ben Harder is here today. Ben and I are friends, and although Ben and I disagree on many methodologic aspects of how you measure quality in healthcare, I deeply respect the fact that they put their methodology out there every year, in great detail, for anybody to look at. And that's what any measure developer should do. So we have here, in the STS world, two well-intentioned cardiac surgery rating systems, highly transparent, with published methodologies, that produce markedly different results. On the left-hand side, you see a paper by Jawitz that I participated in, which shows on the horizontal axis, the x-axis, the decile of STS O/E ratio for CABG surgery. And then on the vertical axis, you see the decile of U.S. News & World Report rank.
And what you would like to see is concordance between those two, so that everything lies along the diagonal. And of course, it's a scatterplot. There's no correlation whatsoever. I tried to look at some of the reasons for this by looking at differences between these two rating approaches, and I've listed them in the box in the lower right-hand corner, which I won't go through right now in view of time constraints, but there are many, many differences between these approaches. And again, to their credit, U.S. News actually does a cross-tabulation each year of STS results versus their own results. And you can see, for example, for aortic valve, which is the upper of the two tables shown here, that out of 44 below-average programs as rated by U.S. News & World Report, we only agreed on two of those. That's 4.5 percent agreement. And similarly, for high-performing programs, we only agreed in 12 percent. So again, two well-intentioned programs that produce markedly different results. And what's a consumer or patient supposed to do with that information?
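(To make the arithmetic behind a cross-tabulation like that concrete, here is a minimal sketch in Python. The hospitals and ratings are invented for illustration; this is not the actual STS or U.S. News data or methodology.)

```python
def percent_agreement(ratings_a, ratings_b, category):
    """Of the programs that system A places in `category`, what share does
    system B place in the same category? (Simple cross-tab agreement.)"""
    flagged = [prog for prog, rating in ratings_a.items() if rating == category]
    if not flagged:
        return 0.0
    agree = sum(1 for prog in flagged if ratings_b.get(prog) == category)
    return 100.0 * agree / len(flagged)

# Invented ratings for three hypothetical programs.
usnews = {"Hospital A": "below average", "Hospital B": "average", "Hospital C": "below average"}
sts = {"Hospital A": "average", "Hospital B": "average", "Hospital C": "below average"}

print(percent_agreement(usnews, sts, "below average"))  # 50.0 in this toy data;
# in the talk's example, agreement on 2 of 44 flagged programs = 4.5%
```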
If you're interested in learning more about the so-called measurement chaos, I highly recommend the Healthcare Association of New York State reports. They've done two of them now, the most recent in 2019, and they point out how consumers are confused by these multiple contradictory reports, and how policymakers and providers derive limited value from them and really don't know how to deal with the problem. So when you have these inaccurate, inconsistent metrics, you can confuse and mislead patients, inappropriately penalize good performers, inappropriately reassure or reward low performers, and misdirect our very scarce quality improvement resources. I would ask of any measure developer: who developed your measure? Were clinicians involved? What's your data source? Does it represent the overall population? Do you audit your data? Do you risk-adjust it? How do you manage exceptional-risk cases? How do you classify outliers? Do you rate or do you rank? What are your criteria? What are your confidence interval criteria? Are your methods published in the peer-reviewed literature? Have you submitted your measures for external validation? These are all important questions.

So I've given you a lot of bad news. Now let me give you what I consider to be the good news for those of us in this room and in our profession. Cardiovascular medicine and surgery quality programs can answer all the questions I just posed and are worthy of stakeholder trust. We've been at the forefront of clinical registries and responsible quality measurement and public reporting literally for decades, and we're further along this journey than any other healthcare sector. And I would say that well-crafted, consistent, publicly reported measures are the best defense and response to that measurement chaos I just described. So let me start first with the surgical side, with a few words about the STS National Database. We've been around for more than 30 years now. We're in virtually every state, and we're sitting here in Texas right now: Texas has 111 programs in the STS Database. This map is up to date as of five days ago. California, 110 programs. We have about a thousand programs in the United States and a number of countries. But even more important than that, how many of the total cardiac cases in the country does our database encompass? We've looked at this a number of times now. Most recently, we looked at data from 2018, at which time 97% of all CABG hospitalizations billed to CMS were at an STS site. So we can go to any external entity and say, look, we have a database that covers basically all cardiac cases in the country. That's something that very few registries can say.

Our registry and our quality activities have also had a tremendous impact on our outcomes. You can see on the upper left, these are CABG outcomes from 1998 to 2016: a 30% decrease in operative mortality, a 56% decrease in renal failure, a 65% decrease in re-operation, and so forth. And similarly, increases in process measure compliance on the lower right-hand side. Public reporting has been very important to us. We began our voluntary public reporting efforts in 2010. You can go to our website, and this is what will show up as the first page. We make a pretty bold statement: we believe that the public has a right to know the quality of surgical outcomes, period. It's our responsibility. This was a pretty big deal when we released these data in 2010. This is from the New England Journal of Medicine, and the statement here is that the voluntary reporting of risk-adjusted outcomes in what was then 20% of U.S. cardiac surgical programs is a watershed event in health care accountability. So what's happened since 2010? Well, you can see all three of our databases started at around 20% participation. As of three weeks ago, we had 80% of adult cardiac programs participating in our public reporting program and 90% of our congenital programs. General Thoracic got a bit of a late start, so they're only at 60%. Overall, for the three databases, there is 77% participation in this voluntary public reporting program. This is the current public report for MGH, my hospital. Just three things I'll point out about it. First, we present the data in numerical, graphic, and star-rating formats, so that readers of varying degrees of numeracy will be able to comprehend these charts. We have hard outcomes, risk-adjusted mortality and morbidity, for the most common major procedures. And we also include several process measures for CABG.

We've shown in several studies that STS public reporting is associated with superior outcomes. These are data from the first four years of the national voluntary program, showing on the left-hand side that risk-adjusted mortality rates for publicly reporting programs are consistently and significantly lower than the risk-adjusted mortality rates of non-reporting programs. And on the right-hand side, in every single reporting period during those first four years of the program, composite scores for publicly reporting programs were higher. I did the same sort of study in my own state, Massachusetts, where we had mandatory public reporting, and the results were even more striking. What we showed was that there were lower operative mortality and lower adjusted mortality odds ratios in Massachusetts during the period of public reporting than in the STS national data, where public reporting was voluntary. And also, very importantly, with regard to the issue of risk aversion: there was none. In fact, our predicted risk of mortality in Massachusetts was higher during the first period that I studied, and about the same during the second period. So no evidence of risk aversion at that level.
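(For readers who want the mechanics behind a phrase like "risk-adjusted mortality": one common approach divides the deaths a program actually had by the deaths expected from each patient's predicted risk, then scales by a reference rate. The sketch below uses made-up numbers and is a simplified illustration, not the STS methodology.)

```python
def risk_adjusted_mortality(observed_deaths, predicted_risks, reference_rate):
    """O/E-style adjustment: observed deaths divided by expected deaths
    (the sum of per-patient predicted risks), scaled to a reference rate."""
    expected_deaths = sum(predicted_risks)
    oe_ratio = observed_deaths / expected_deaths
    return oe_ratio * reference_rate

# Hypothetical program: 200 cases, 4 deaths, each patient's predicted risk 2.5%.
risks = [0.025] * 200
print(risk_adjusted_mortality(4, risks, reference_rate=0.022))
# expected deaths = 5.0, so O/E = 0.8 and the adjusted rate = 0.8 * 2.2% = 1.76%
```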
And based on our quality program, we were privileged and honored to receive the Eisenberg Award from the Joint Commission and the National Quality Forum. I show this slide because standing beside me on the right in this picture is Dr. Fred Grover, somebody who many of you know, who was one of our great leaders in STS quality and a person who was an inspiration to me.

In cardiovascular medicine and interventional cardiology, you also have a tremendous set of registries, launched in 1997. It's now a robust family of registries. I've listed most of them here: over 50 million records, many data quality checks, random audits, doing all the right things to get high-quality data. You have a public reporting program based on four process measures. Although I congratulate the ACC on those four process measures, I think it's time to move to outcomes. You do that in TAVR, in a program that is a joint program with STS. There's a very innovative win-difference approach that's used to rank six potential outcomes, and I think it's time to move in that direction with CathPCI as well. These are the public reporting participation numbers as of the middle of July: 54% participating for CathPCI and 44% participating in public reporting for TVT. I don't want to forget the American Heart Association, which has many excellent Get With The Guidelines reporting programs as well.

And finally, if you don't remember anything else from this talk, remember this slide. I talked at the very beginning about the many conflicting measurement programs out there, the fact that we had few standards, and that many of these measures were never submitted for external validation. Until recently, when there was a shift from NQF to Battelle as the so-called consensus-based entity, NQF was the premier measure validator in American health care. To receive NQF endorsement took a lot of work and meant you had to jump through a lot of hoops, and if a measure is endorsed by NQF, you can be assured that it's valid. I just want you to look at the three organizations that have by far the highest number of NQF-endorsed measures of any in health care: the Society of Thoracic Surgeons at 33, the American College of Cardiology at 17, and the American Heart Association at 6. So we have, by nearly an order of magnitude, the greatest number of NQF-endorsed high-quality measures of anybody in health care.

So what's the bottom line? Well, you can trust cardiovascular quality measurement, whether it's STS, ACC, or AHA. We're using high-quality audited registry data with risk adjustment; it's published and externally validated, and in many cases we have a more than three-decade track record. I think that you should try to move towards much higher percentages of public reporting. There's no reason not to publicly report, particularly in our specialty, with the quality of our measures. And I think you should start including risk-adjusted outcomes for NCDR. I think it's time. I can say that; many of my cardiology colleagues might get in trouble for saying it, but I think that's the way you need to go. It's our ethical responsibility. Among the four ethical principles that we talk about in health care are beneficence and non-maleficence: we need to do good for our patients, and we need to avoid doing harm. One of the best ways to do that is to measure and report. And we need to provide patients with autonomous decision-making capability by giving them all the available information.
We know that public reporting improves outcomes and incentivizes improvement, and to some extent we're now seeing quality-based payer incentives shift patients to higher-performing providers. Appropriate public reporting identifies high- and low-performing programs for recognition or remediation. And finally, good public reporting of high-quality measures is the best defense against unreliable rating or ranking systems. So with that, thank you very much.

So I'm going to speak with you this morning about a topic that I've covered many times. Public reporting, and my involvement in it, hasn't always been the most popular thing. So rather than stand behind a podium, I like to keep moving, because I figure I'm not quite as good a target if I keep moving. The title of the talk, the Good, Bad, and Ugly of Public Reporting, is one that I've used in the past. Yeah, I did retire; I retired at the end of last year. But my wife asked me: so if you're retired, why are you going around giving talks? Why are you writing this paper? Why are you doing this? And I said, well, honey, it's all relative. It's a relative retirement. So if you've heard me speak on this before, you'll notice that I'm using the title again. I have no disclosures other than the fact that I was the chair, and am now the co-chair as we prepare to turn this over, of the Public Reporting Advisory Group. So understand, I'm biased in regard to public reporting, and I admit it without any shame whatsoever.

So, the Good, Bad, and Ugly of Public Reporting. You'll notice that I borrowed this title from the 1966 movie by the Italian director Sergio Leone, starring Clint Eastwood. This movie, probably more than any other, was the one that launched his acting career. But to try to cover the good, bad, and ugly of public reporting in 20 minutes is very difficult, and it reminds me of another movie that I'm sure you're all much more familiar with, and that's Mission Impossible. But we're going to try to cover this as efficiently as we can. So with the help of my colleague, Dr. Shahian, who's already covered a lot of these points, and with apologies to the director of this movie, we've rearranged the title a little bit, because I want to end with the good of public reporting. So we're going to talk about the bad and ugly first, and then go ahead and talk about the good.

So the first point is to understand your publicly reported data. Where does this data come from? It comes from only two sources. Number one is administrative data, which basically means hospital bills. If any of you have ever gotten a bill from a hospital, you know it can be very confusing to sort all of this out. The administrative data that's most commonly used comes from the Medicare database. So right off the bat, you're limited in terms of scope: it's only the Medicare population, although Ben and his colleagues are expanding this to include Medicare Advantage plans and some outpatient data. But there's a lot that can be missing from it. Sometimes procedures don't get coded, and there are a lot of problems; we don't have time to go into all of them. But another problem is that it's delayed by two to three years from real time. And I always had a hard time explaining this to my colleagues, because they would come to me and say, but I had a great year last year. I didn't have a single mortality, and here this report is coming out. And I said, well, wait a minute. You understand, this data is two to three years old. It's not real time.
A couple of quick examples. This is actually a study that Dave did, looking at their Massachusetts administrative data and comparing it to actual clinical data. What they showed for bypass surgery was a 27% discrepancy in even the number of cases that were tabulated, and an absolute difference in mortality of 0.83%. Now you say, well, that doesn't sound like very much. But remember, even back in that era, mortality for bypass surgery was maybe 3%. So now you're talking about being almost a full absolute percentage point off. That's a lot. Moving ahead, on the other side of the slide is administrative data compared with clinical data from one of the pivotal TAVR trials. What was shown is that administrative data was pretty good at deciding whether or not the patient was dead or alive, modestly good at determining whether they had a complication such as an MI or acute kidney injury, and really not very good at picking up other complications such as bleeding.

Now, clinical data. I don't have to tell this group about clinical data; it's derived from the databases that we use. It's extensive, it's well defined, and a lot of the measurements have already been vetted through the NQF, which makes risk adjustment more robust. But again, in this group, I don't have to tell you that it's very labor-intensive. There is an audit, but it's not nearly enough. And it is costly. And if I were Ed McMahon talking to Johnny Carson, I would say, well, how costly is it? You remember that TV show. It's actually very costly. This slide shows the cost of public reporting just in Massachusetts, from a survey of 24 cath labs, and it's only for PCI. You can see the annual cost: anywhere from $50,000 to $200,000 a year, one to one and a half FTEs, five to ten hours per week. And on a somewhat arbitrary burden scale, it was felt to be quite burdensome. And here's the other headline, from another publication: US physician practices spend more than $15 billion annually reporting on various quality measures. Billions, billions. And that's not just cardiology; that's the whole picture. Plus, there are promotional costs. I don't know if you've ever been recognized by one of these programs, but they will give you a phone call and say, oh, Dr. Dehmer, congratulations, your facility is being recognized by our program as one of the best, and if you'd like to buy our advertising package, we have a good, better, and best plan, and it could cost you $90,000. So there are promotional costs involved if you wanna advertise this for your institution. And let's have respect for our colleagues in primary care. This was just published a couple of weeks ago. Now, this is for value-based contracting, but the average primary care physician has to track 57 different quality measures for these various programs. It's an overwhelming burden if you're gonna do it.

Inconsistent results. Again, Dr. Shahian has commented on this; I'll go into a little more depth. There are a couple of studies here. This one looked at four different programs: U.S. News & World Report, Healthgrades, Leapfrog, and Consumer Reports. And what did they find? No hospital was rated as a high performer in all four rating systems, and only 10%, let me say that again, 10% of the hospitals rated as a high performer in one rating system were rated as a high performer in another rating system. So, inconsistent results.
Another study, very quickly, looked at three different reporting systems, Hospital Compare, U.S. News, and Healthgrades, just at 30-day mortality ratings for COPD and heart failure. And again, all three ratings agreed in only about 16% of cases and were off by one category in about 80%. So, inconsistent results.

Now let me set the stage for the next problem with public reporting, and here I'll disagree a little bit with my colleague, Dr. Shahian. So think about this. It's Monday morning, and you have an important meeting with the CEO and the CMO; they want to know something about the quality in your area. Pretty important meeting. It's at eight o'clock. Maybe you overslept a few minutes. The traffic is especially bad that morning, it's slow, and finally the traffic opens up, and you see your chance: I can catch up now, and I don't want to be late for that meeting. So you speed up a little bit. And then you see this. What are you gonna do? I can tell you what you're not gonna do. You're not gonna speed up. You're gonna slow down. And that's an example of risk avoidance: you don't want a ticket, because that would make you even later. So does risk avoidance exist? Well, yes, it does, and it's inherent in our human nature.

Now, I need to explain this slide a little bit. What you're seeing in the graphic is operative mortality at the Cleveland Clinic, and it's what happened after New York State started their public reporting program. The dark bars are the mortality for bypass surgery in New York State; that's what was reported in their database, and you can see, year over year, it's about two and a half to three and a half percent. The clear bars are the operative mortality at the Cleveland Clinic in patients referred from New York, because 67% of surgeons in New York refused to operate on at least one high-risk patient. So they put them on a bus and sent them to Cleveland, where they got operated on. And their mortality, as you can see, was considerably higher. Why? Risk aversion on the part of those cardiac surgeons. Same thing for PCI. When the New York database started reporting PCI mortality, they saw a decrease in mortality year over year, 0.82% to 0.58%, and initially this was heralded as a great success that public reporting was bringing about. Well, then they stopped and looked at what was happening with cardiogenic shock, obviously a very high-risk situation. And they noticed that the number of cases for cardiogenic shock went down by almost half, because physicians did not want to do these high-risk cases for fear that they would be reported as having a high mortality. Another example: Massachusetts has been a state with very strong public reporting, as Dave has mentioned. This graph looks at what happened with PCI mortality when four hospitals, those four little black carets along the bottom, were identified as poor performers with elevated mortality. And I'm not talking about just some little note; I'm talking about front-page stories in the Boston Globe. Their expected mortality, which is what's on this graphic, went down, because they were avoiding the high-risk cases. So if you put this on a graph, this is a simplified way of looking at things. On the horizontal axis is the incremental value to a patient, and on the vertical axis is the risk of the procedure.
So in the lower left-hand corner, there's that nice concentric 60% RCA stenosis in the patient who has angina once a month. You're probably not gonna do that person a lot of good by doing a PCI. And that escalates all the way up to the upper right-hand corner, which is the elderly patient with an anterior wall infarction and cardiogenic shock: a patient who is clearly high-risk, but probably has the most benefit to gain. And unfortunately, public reporting, honestly, does have this perverse effect of pushing you downward, away from those higher-risk patients. That's why a lot of these databases, including New York's, now eliminate some of these refractory, high-risk cases: cardiogenic shock, hypoxic brain injury, where you have this gorgeous PCI result, but unfortunately the patient is left with severe brain dysfunction, and ultimately the family may decide that they don't wanna continue care. Now, some of this risk avoidance can be modulated by good risk adjustment. The NCDR published its latest model for risk adjustment of mortality about two years ago, and you can see on the right, it includes a lot of the clinical instability factors that people are worried about, plus cardiac arrest and whether the patient is responsive or unresponsive. We don't have time to go into how to actually operationalize this at your local institutions; maybe we'll get into that with some of the questions. But it can be done. It's not perfect, but it can be done.
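(To give a feel for how a risk-adjustment model turns patient characteristics into an expected mortality, here is a toy logistic model. The coefficients and variables are invented for illustration and bear no relation to the published NCDR model.)

```python
import math

# Invented coefficients for illustration only; not the NCDR model.
COEFFS = {
    "intercept": -5.0,
    "age_per_decade_over_60": 0.5,
    "cardiogenic_shock": 2.2,
    "cardiac_arrest_unresponsive": 1.8,
}

def predicted_mortality(age, shock, arrest_unresponsive):
    """Logistic model: the log-odds of death are a linear combination of
    risk factors, converted to a probability with the logistic function."""
    z = (COEFFS["intercept"]
         + COEFFS["age_per_decade_over_60"] * max(0, (age - 60) / 10)
         + COEFFS["cardiogenic_shock"] * shock
         + COEFFS["cardiac_arrest_unresponsive"] * arrest_unresponsive)
    return 1 / (1 + math.exp(-z))

print(predicted_mortality(age=82, shock=1, arrest_unresponsive=1))  # ~0.52, very high risk
print(predicted_mortality(age=65, shock=0, arrest_unresponsive=0))  # ~0.009, low risk
```

A death in the first patient barely moves a program's O/E ratio, because much of that death was "expected"; a death in the second moves it a lot. That is the mechanism by which good risk adjustment blunts the incentive to avoid high-risk cases.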
So, I'm gonna take a brief side trip here and talk about the history of ACC public reporting. We've been doing this now for 14 years. This started back in 2010 with something that was very carefully studied on the part of the college, realizing there was a lot of anxiety about it at first. From concept through design to reality took about six years before public reporting was first introduced from the NCDR, and it took more time before it reached what you see now: if you go to CardioSmart, Find Your Heart a Home is the patient-facing view of public reporting, with more than 2,000 hospitals now publicly reporting. And thank you to all those out there who do. Along the way, and I'm gonna use the first person here and say "I" a lot, but understand it's not just I: there was the Public Reporting Advisory Group providing a lot of input, and I can't even count the number of dedicated and very talented ACC staff who all contributed to making this website work and run, and they are really great. But somehow along the way, I became the face of public reporting for the NCDR. One of the first things I had to do was go up to Washington and convince the ACC Board of Governors, that's B-O-G, Board of Governors, that public reporting was happening, that it was a good thing, and that the college was gonna do it. And there was a lot of anxiety about this. I was able to capture this picture of the group coming into Heart House on the morning that I gave the talk to the Board of Governors about public reporting. They were not a happy group, not at all a happy group. And that's why I keep moving on the stage here, because people can't shoot at me. They were not a happy group.

And for all of the difficulty with that, I then had to go talk to my colleagues in interventional cardiology, who were an even tougher group, and then the Heart Rhythm Society. So people ask, well, how did you wind up being the target here? You've seen this scene in the movies. They have the whole line of soldiers standing at attention, and the colonel is out there and says, I need a volunteer for this very dangerous and perhaps even suicidal mission, and I would like that volunteer to step forward. And everybody in the line steps back, except for that one person who really wasn't paying attention, and they're left standing out in front. Well, somehow that turned into me. I'm not sure how that happened. But the college was great, and they were very supportive. For a while, I was in their witness protection program. And, you know, that worked out pretty well for a while, but I knew I was being covered. And I have to go get, here, a little something that was sent to me. One day I come home from the cath lab, and my wife says, you got this really big and heavy box, and it's come from the ACC. And I go, I don't know what that is, give me a break. So I opened it up, and what I had was this. If you can't really see it, this was sent from the college to help protect me during this public reporting thing. And there was a note on the inside of it from Bill Achen, who at the time was the vice president for quality. And it said: Greg, I just wanted to be sure you knew we are behind you, way behind you. And I gotta take this thing off. This is heavy.

So, you know, I kind of look at things in a simple way. There's what I call the big five: U.S. News & World Report, Medicare's Hospital Compare, and then the databases, although SVS does not publicly report their data like STS and NCDR do. And then there are all the other ones, which are of varying quality. I kind of put those below the line; I don't put a lot of stock in them. As Dr. Shahian has pointed out, there's lots of variability, and they don't publish their methodology, etc., etc., etc. And, you know, it's even worse for doctors. You can see I'm a fan of The Far Side: it's a bummer of a birthmark there, Doc. A lot of these doctor-rating sites are terrible. People can send in and write whatever they want. I was criticized because there were no parking spots when the patient came for their visit, and they had to drive around; they were very frustrated. And even the CMS site, go look up one of your favorite doctors, really doesn't say a whole lot. It's really just noise, I call it noise; it's too much information. And to summarize what Dr. Shahian said: too many measures, highly variable, and inconsistent and confusing findings. But despite this, public reporting has a tremendous amount of support from payers, public health officials, and patient advocates, but not so much from doctors. Look at the graph on the right, from a survey of physicians: the scorecard measures physician quality, low; is useful, low; improves patient care in New York, low; but may dissuade PCI from being done in appropriate patients. Yeah, that's what doctors think about it. So I've just scratched the surface of the ugly and bad. So what's the good of public reporting? Well, there is good, and I'm going to convince you that there is some good. Don't you have a right to know? Airline safety is a big concern.
I'm sure everybody has seen the picture on the left, the door that came off the Alaska Airlines flight. In the middle, there's a tire coming off a United flight as it took off. And on the right, this just happened about a week ago in Atlanta: a couple of Delta planes got into it, and the big plane hurt the little plane, as you see there. So yeah, I bet most of you got here on an airplane. Don't you have a right to know about airline safety? And there are some websites where you can look that up. I'll bet the majority of individuals in here have kids in school. You ever look at SchoolDigger.com? It's really a very good site. It profiles schools: test scores, enrollment, all sorts of interesting things. These happen to be the two schools that my grandchildren go to, and they're rated pretty highly. And I'll bet when you were thinking about coming to this conference, you went to this site and asked, is the hotel any good? I want to be staying at a good one. And I think you'd all agree, this is a great place; it has an excellent rating. So I would say to you, the good of public reporting is this: as patients, don't you deserve to know about the quality? My answer is yes.

So I'm going to ask you to go to a little different place here and think about this. In this audience, you probably recognize this. That's a bicuspid aortic valve, and along the bottom you see those accumulations of calcium. It's just like a single slit. Obviously, the person's having heart surgery, and that valve is about to come out and be sent to a pathologist. If that were you, and statistically there are people in this audience who are going to need their aortic valve replaced at some point, would you want to know your surgeon's results? Would you want to know the results at the facility where you're going to go? I think the answer is yes. And now I will tell you that that's not any old valve. That's my aortic valve, two years ago, on its way out to pay a visit to a pathologist. And you can bet your bottom dollar that I knew the results at the institution where I had this done, and it was my home institution, by an excellent surgeon. As Dr. Shahian has pointed out, the public has a right to know about the quality of outcomes. You would want to know, for you or your loved ones. It's an obligation.

It's not been that easy to show the benefit, though. It's easiest to show for process measures: public reporting improves process measures, or at least the documentation of process measures, as you can see here for acute myocardial infarction from the CMS data, where they all improve over the time they're publicly reported. Dr. Shahian has shown this slide: the green are the places that are reporting versus the red, and consistently, the publicly reporting places do better. The same has been shown for PCI; in this study here, publicly reporting institutions had better quality outcomes. But even I have to admit that the results are mixed. Two studies here, a big review from the Annals and another from the Cochrane group, show it's been very difficult to demonstrate. It's not like you can do a double-blind, placebo-controlled trial of public reporting; you can't really do that. So it's been very hard to show that public reporting gives you the quality improvement that you want. It's pretty complicated, this public reporting thing, but I have a simpler version.
I break it down into this. There's passive public reporting; that's what U.S. News & World Report does, what CMS does, and what a number of states do. You're going to get reported, and there's nothing you can do about it; you can't control it. The other is active public reporting, and that's things like STS and NCDR. The point is, this public reporting train has left the station. It's not coming back. It's going to happen. But wouldn't you rather get on board with data that you control, rather than turning the control over to somebody else? There are lots of challenges ahead; we don't have time to go into everything on here, so I'll just mention a couple. Things are moving towards more disease-based reporting. That is, instead of reporting first-medical-contact-to-balloon time, you actually report across the spectrum of how patients with STEMI are treated, realizing that it's a team sport. It's not just the interventional cardiologists; it's the emergency room, EMS, and the people taking care of the patients afterwards. There's also a move beyond process measures toward patient-centric reporting. You've seen some of this in PCI with the ORBITA trials. There was an interesting test done with our colleagues in orthopedics and knee surgery, and CMS was actually doing this. It wasn't reporting how many people died during their knee surgery or how many people had infections from their knee surgery. They went back a year later and said, okay, are you walking better, and do you have less pain? Really patient-centric reporting. I'll end with a slide that I've ended a lot of talks on, and that's the status of public reporting. It's like fireworks. There's an explosion of activity, and it's going in a lot of different directions. It draws a large crowd; there's no doubt about that. Some think it's beautiful and a wonderful thing. Others think it's a very scary thing to do. But the bottom line is, you can get hurt if it's not done properly. That's why I would emphasize everything that Dave said earlier about the methodology: knowing the methodology, and having the methodology published along with the results. At that point, I will stop. Thank you very much.

All right. Well, I want to thank our esteemed panel for sharing their expertise today. I think we can all walk home and make changes, and if you haven't already signed up for public reporting, we're hoping that this call to action will do that. Now I'd like to tip my hat and thank Ben Harder for joining us here this morning from U.S. News & World Report to share his expertise. Ben, we clearly heard that consumers are confused about all the methodologies and are a bit lost. What's our next best alternative?

Yes. We heard that there's an obligation to publicly report, but I think it's also that patients are seeking this information. They want to know about the quality of the hospital or the program that they're going to, and they look for this information online in volume. The alternative to the sort of public reporting that we've been talking about this morning is, you know, the Yelps and the Facebook likes and the sort of commentary on the parking situation and so on. And if that's not what we want patients to have access to, or if that's not all that we want them to have access to, then we really need to embrace and push forward with public reporting. So, as you heard my co-speakers talk about today, U.S. News does what I think was described as passive public reporting.
Hospitals, programs, and providers don't need to do anything, and in fact can't do anything, to opt out of that public reporting. We use all of that data generated through administrative claims to evaluate every hospital that offers a given service. That means about a thousand heart surgery programs in the country, nearly a thousand TAVR programs, 2,500 hospitals that offer AMI care, and about 4,000 hospitals that provide care to patients who are hospitalized for heart failure. We're publishing all of that information on our website, freely available to consumers, and that forms a foundation of information that they can use to make informed decisions about where to seek care in their community for the type of service that they need. But I think that's just a foundation, because, as we heard, there are limitations to administrative data, and we recognize that. We've done everything we can, with the advice of folks on this stage and many others, to pull the signal from administrative claims, to address issues like risk aversion and so on, and to make sure that our measurement is as good as we can make it with administrative data. But there is really no substitute for the sort of clinical data that's available in registries. And so U.S. News sees the registries very much as allies in making that information available to consumers and encouraging them to avail themselves of it instead of relying strictly on Facebook likes and so on. That's why we've built explicit incentives into our methodologies to encourage hospitals to embrace public reporting. Our rating methodologies for heart attack, for heart failure, for TAVR, as well as for several thoracic surgeries, all build in whether or not a hospital has engaged in voluntary public reporting through the relevant registries. And that is often one of the most powerful signals in our model of which hospitals have the strongest programs in those areas. It is one thing that is completely under the control of the department in the hospital, and it makes an enormous difference in what kind of rating that hospital gets for that particular service. And we think it's the right thing for patients, because it puts that information from the clinical registries in front of them.

Great. Another question, Ben, regarding U.S. News. Can you talk a little bit about the weighting categories, and maybe the Doximity piece? I know our hospital, our health system, has really tried to better understand that, because we actually went down in Doximity despite putting a lot of effort and time there. So could you talk a little bit about that?

Sure. As many of you know, we survey thousands of physicians about where they would refer their sickest patients, and we use Doximity to reach those physicians. I call it the physician survey, because we're surveying physicians. We used to reach them through the U.S. Postal Service and FedEx; we found it more cost-effective to do at scale through the Doximity platform, but it is a physician survey. I would emphasize that that is a measure we use in our ranking of the top 50 cardiology, heart, and vascular programs in the country. But it applies really to those 50 hospitals, and in fact only a subset of them get any real, meaningful benefit from it. It's also basically non-modifiable. I know hospitals often think that they can do something to improve it. We haven't seen evidence of that.
So I think that is a relatively modest piece of our methodology. Again, it applies at most to about 50 hospitals in the country, whereas, as I mentioned, we rate over 4,000 hospitals in one or more cardiovascular services. And in those ratings, AMI, heart failure, TAVR, and so on, we do not use expert opinion at all. As I mentioned, whether a hospital has embraced public reporting through the relevant registries is a very heavily weighted measure. It also factors into the top 50 rankings, where public transparency is weighted at 3%, but it carries a much heavier loading in the models that we use for the ratings in AMI, heart failure, and so on.

Greg, I have a question for you. Dave has challenged the ACC and the NCDR to move away from process measures into outcomes measures, as we've already done in the TVT registry and some of our other registries. What can you tell our audience about the potential future in doing that here?

It's coming. No, we're well aware of that, and it's a very valid criticism that up to this point we have, for the most part, process measures, because those are the easiest to tabulate, and it was like sticking our toe in the water with a very anxious group of hospitals and physicians. That was by intention. But composite measures that include outcomes, for PCI not only mortality but also the occurrence of renal insufficiency or renal failure, and bleeding, are all being incorporated into a composite measure. And that's the other challenge: remember that the first word in all this is public. This is for the public, so we have to put something out there that is easy for the public to understand. Dave has outlined their approach: they have a star rating, which is pretty easy to understand, plus a much more detailed view. We're focusing on having it work for the public, and although this entire audience would understand the details, we're trying to make it fairly simple for that audience and to summarize things in a composite measure, along the lines sketched below. So it's coming.
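(A rough illustration of how several risk-adjusted outcomes might be rolled up into a single composite and then collapsed into a star rating for a public audience. The weights, thresholds, and inputs here are invented; this is not the NCDR's actual methodology.)

```python
# Invented weights for illustration; not the NCDR methodology.
WEIGHTS = {"mortality": 0.5, "renal_failure": 0.25, "bleeding": 0.25}

def composite_score(oe_ratios):
    """Weighted average of O/E ratios across outcomes; below 1.0 means
    better than expected, above 1.0 means worse."""
    return sum(WEIGHTS[k] * oe_ratios[k] for k in WEIGHTS)

def star_rating(score):
    """Collapse the composite into a simple rating for a public audience."""
    if score < 0.8:
        return "3 stars (better than expected)"
    if score <= 1.2:
        return "2 stars (as expected)"
    return "1 star (worse than expected)"

program = {"mortality": 0.9, "renal_failure": 0.7, "bleeding": 1.1}
print(star_rating(composite_score(program)))
# 0.9*0.5 + 0.7*0.25 + 1.1*0.25 = 0.90 -> "2 stars (as expected)"
```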
One question I have for the panel, especially maybe Dr. Shahian. We saw that CathPCI public reporting adoption is not as great as STS's 80%. What recommendations or best practices do you have to share from your journey?

Well, I think getting stakeholder trust is probably the single most important thing. If providers trust the results, they're going to be more likely to do it. But there's also that issue I tried to raise, of feeling as a community that you have an ethical responsibility to do this. You have to convince people that this isn't something that's being mandated; this is something that you as a health care provider have a responsibility to do. I think that is equally important to the trust issue. Could I just go back to one thing that Greg said for a second? This issue of risk aversion is probably the most commonly discussed objection to public reporting. There are at least a dozen papers in cardiac surgery that looked at this, and about half of those separate providers into degrees of either quality or, in the case of physicians, years of experience. And one thing that those papers have shown, and please feel free to email me, I'll send them to you, is that in a public reporting environment there is better matching of high-risk patients to high-performing providers. Yes, some of the lower-performing or less experienced providers may turn down those patients, but they end up getting shifted to higher-performing, more experienced centers and physicians. So I think that's the other way to look at this issue. It's not that these patients are being denied services across the board; they're getting shifted, at least in cardiac surgery, to better providers.

And one last point in terms of the issue of being willing to take on a very high-risk patient. Most of the risk models in cardiovascular surgery, and in healthcare in general, tend to over-predict adverse outcomes at the very high end. So when you get up to 40 or 50 percent predicted mortality, most patients in that category are actually going to have less than 40 or 50 percent actual mortality. So your chance of having a favorable O-to-E ratio if you operate on very high-risk patients is actually much better. On the other side, if you're operating only on patients with very low predicted mortality, down in the, let's say, half-percent range, you lose one of those patients and your outcomes are going to be adversely affected for years, literally. So as Ralph and I have often discussed, from an O-to-E standpoint, it's money in the bank, in a way, when you operate on and save a high-risk patient. Just two things to think about in terms of this risk aversion issue.

I will point out that the top 50 U.S. News hospitals are not hospitals that avoid risk; they are hospitals that are magnets for risk. In fact, we saw a slide earlier of what happened to the patients who couldn't get care in New York, and the types of hospitals where they ended up getting care are many of the ones in our top 50. So I think that should help put to bed the concern that risk aversion is somehow promoted through public reporting. Public reporting, when done well, actually rewards hospitals that take on the level of risk they can handle and get good outcomes.

So, Ben, can we have you come to some of our hospital meetings to talk to our physicians for us? I know we're over time, so I'm going to close us out on one last question, and it ties into our whole theme today. Dr. Brindis, can you talk a bit about the mandatory public reporting that's coming in California? Because that's setting the stage, I think, for action today.

Well, thanks, I'll make this quick. This coming fall, the state of California is taking on mandatory public reporting for TAVR, and the first report has already been put to bed and will be released shortly. I'm particularly proud of the state of California. I also have a conflict of interest, like Greg. The partnership between the state of California health department, the California ACC, and the California STS has populated a clinical advisory committee consisting of surgeons, interventional cardiologists, and cardiologists. We work hand-in-hand on the methodology that the state takes on in any public reporting process, and we allow hospitals and physicians the opportunity to look at their results and even ask for adjudication, a very good process that hadn't been mentioned yet today. So we're excited to be leading that way, particularly appreciating that TAVR is now the predominant strategy for isolated aortic valve replacement.

Yeah, well, with that we will close out our session. I want to thank you this morning for giving us the gift of your time to have this really important discussion.
Video Summary
Dr. Ralph Brindis, Senior Medical Officer of External Affairs for the ACC National Cardiovascular Data Registry (NCDR), opened a session on advancing transparency and accountability through public reporting in cardiovascular healthcare, with the aim of promoting sound public reporting among hospitals participating in the NCDR. Dr. David Shahian of Harvard Medical School emphasized the substantial variation in healthcare quality and the importance of accurate measurement for improving it. He argued that current healthcare quality measurement amounts to "measurement chaos," producing conflicting results that undermine stakeholder trust, and advocated for standards and rigorous, published methodologies to ensure the reliability of public reporting.

Dr. Greg Dehmer followed by addressing the good, bad, and ugly aspects of public reporting. He acknowledged problems such as reliance on administrative data, which can be outdated and inaccurate, and the potential for risk aversion, where providers avoid high-risk cases to keep poor outcomes out of public reports. Despite these challenges, Dehmer argued for public reporting because it promotes transparency and allows patients to make informed decisions.

Ellie Huff moderated a panel discussion that included Ben Harder of U.S. News & World Report, who explained how the U.S. News rankings integrate public reporting from clinical registries. Harder noted that public reporting offers consumers a more reliable alternative to informal sources like social media, and that U.S. News builds explicit incentives into its rankings to encourage hospital transparency.

The panel agreed that although public reporting poses challenges, it is essential to ethical medical practice and to improving overall healthcare quality, ensuring that patients can know their providers' performance. The session concluded with a note on California's upcoming mandatory public reporting for transcatheter aortic valve replacement (TAVR) procedures, setting the stage for state-level public reporting initiatives.
Keywords
public reporting
cardiovascular healthcare
transparency
accountability
healthcare quality
National Cardiovascular Data Registry
risk aversion
U.S. News rankings
California TAVR reporting
ethical medical practice