The Joint Commission conducts unannounced on-site surveys at hospitals every 18 to 36 months. These inspections take a week, during which surveyors collect data on patient safety, infection control, medication management, and more. The surveys are a big deal because they are how the Joint Commission makes accreditation decisions. Losing accreditation can damage a hospital's reputation, or even result in censure or closure. When a hospital is being surveyed, its employees know it, and hospitals now often make a point of training staff in "survey readiness".
A new paper in JAMA Internal Medicine examined how hospital behavior changes when these surveys are being conducted. The researchers didn't just look at process measures, though; they looked at patient outcomes, and specifically at how outcomes differed during survey weeks versus non-survey weeks.
Because the timing of the surveys is unannounced, this is sort of like a randomized controlled trial. The researchers looked at Medicare admissions at almost 2,000 hospitals, comparing admissions during survey weeks with those in the three weeks before and the three weeks after each survey. They controlled for patients' sociodemographic and clinical characteristics, and they also conducted subanalyses of teaching versus non-teaching hospitals.
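To make that design concrete, here is a minimal sketch of the kind of adjusted comparison described above, not the paper's actual model. It assumes a pandas DataFrame with hypothetical column names (died_30d, survey_week, age, female, n_chronic_conditions) and uses only a handful of stand-in covariates in place of the study's fuller risk adjustment.

```python
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_survey_effect(df: pd.DataFrame):
    """Logistic regression of 30-day mortality on a survey-week indicator,
    adjusting for a few illustrative patient characteristics.

    Expected (hypothetical) columns in df, one row per admission:
      died_30d      1 if the patient died within 30 days of admission
      survey_week   1 if the admission fell during a survey week
      age, female, n_chronic_conditions   example covariates
    """
    model = smf.logit(
        "died_30d ~ survey_week + age + female + n_chronic_conditions",
        data=df,
    ).fit(disp=False)
    # The coefficient on survey_week is the adjusted log-odds difference in
    # 30-day mortality between survey weeks and non-survey weeks.
    return model.params["survey_week"], model.conf_int().loc["survey_week"]
```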
The main outcome of interest was 30-day mortality. They also looked at secondary measures like rates of Clostridium difficile infection, mortality from in-hospital cardiac arrest, and patient safety indicators like the PSI 90 and PSI 4.
Because there's so much Medicare data, the study sample included more than 244,000 admissions during survey weeks and almost 1.5 million admissions during non-survey weeks. There were no meaningful differences in patient characteristics, admission diagnoses, or in-hospital procedures between the two groups. The average age of patients in the study cohort was about 73 years.
The 30-day mortality for patients in the non-survey weeks was 7.21%. In the survey weeks, though, it was 7.03%, which was significantly lower. Permutation analyses showed that the observed difference was larger than 99.5% of the mortality differences obtained from 1,000 random permutations of the survey dates, meaning these differences are very unlikely to be due to chance. The reductions in mortality were even bigger in major teaching hospitals, where 30-day mortality dropped from 6.41% in non-survey weeks to 5.93% in survey weeks.
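That permutation logic is worth unpacking, because it is what lets the authors say "unlikely to be due to chance." The sketch below is a simplified, label-shuffling version of the idea (the paper permuted survey dates, which is more involved): compute the observed survey-week difference, then see how often a difference at least that large appears when survey-week labels are assigned at random.

```python
import numpy as np

def permutation_test(mortality, is_survey_week, n_perm=1000, seed=0):
    """Label-shuffle permutation test for a difference in mortality rates.

    mortality      : 1/0 array, death within 30 days of admission
    is_survey_week : 1/0 array, admission fell in a survey week
    Returns the observed rate difference and the share of permuted
    differences at least as extreme (a permutation p-value).
    """
    rng = np.random.default_rng(seed)
    mortality = np.asarray(mortality, dtype=float)
    labels = np.asarray(is_survey_week, dtype=bool)

    def rate_diff(lab):
        return mortality[lab].mean() - mortality[~lab].mean()

    observed = rate_diff(labels)
    permuted = np.array([rate_diff(rng.permutation(labels)) for _ in range(n_perm)])
    p_value = np.mean(np.abs(permuted) >= np.abs(observed))
    return observed, p_value
```

In the study, the observed difference sat beyond 99.5% of the permuted differences, which is the permutation analogue of a very small p-value.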
There were no significant differences in any other factors, though, including length of stay or any of the secondary outcomes.
There are a number of reasons why mortality might be lower during Joint Commission survey weeks. But the quasi-randomness of the unannounced surveys makes a causal explanation more plausible than it would otherwise be. The most obvious explanation is that hospital employees might be more attentive and careful during surveys than they are at other times. The fact that major teaching hospitals saw the largest effect might mean that they are able to make more significant changes during survey weeks than other hospitals.
Of course, this is still an observational study, and the causal link is therefore not proven. The overall effect was also small, an absolute difference of less than two-tenths of a percentage point, and it may be statistically significant only because of the huge size of the database. Because no differences were detected in any of the secondary measures, we also have no idea what mechanism drove the reduction in mortality. Finally, this was a study only of Medicare patients, not all patients.
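To put "small" in perspective, here is the back-of-the-envelope arithmetic implied by the reported rates (a simplification that treats the raw, unadjusted rates as the effect):

```python
# Reported 30-day mortality rates from the study
non_survey = 0.0721   # non-survey weeks
survey = 0.0703       # survey weeks

absolute_diff = non_survey - survey               # 0.0018 -> 0.18 percentage points
relative_diff = absolute_diff / non_survey        # ~2.5% relative reduction
admissions_per_death_averted = 1 / absolute_diff  # roughly 1 fewer death per ~556 admissions

print(f"absolute: {absolute_diff:.2%}, relative: {relative_diff:.1%}, "
      f"admissions per death averted: ~{admissions_per_death_averted:.0f}")
```

So the difference is real but modest: on the order of one fewer death for every several hundred admissions that happen to fall in a survey week.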
But, still, it shows that measuring safety might actually improve it. Unfortunately, those improvements fade away when the measurement is over. We need to find ways to make them stick.