Figure 1. Adverse Drug Event, Catheter-Associated Urinary Tract Infection (CAUTI), Central Catheter–Associated Bloodstream Infection (CCABSI), and Fall (Moderate or Greater Harm) Rates Before and After Implementation of Solutions for Patient Safety (SPS)

Dots indicate the hospital-level average in hospital-acquired condition rates over time. The orange slope lines were estimated using an interrupted time series model. The orange dashed lines represent 95% CIs of the slope lines. The vertical blue line indicates time of the SPS initiation.

Figure 2. Pressure Injury (PI), Surgical Site Infection (SSI), Ventilator-Associated Pneumonia (VAP), and Venous Thromboembolism (VTE) Event Rates Before and After Implementation of Solutions for Patient Safety (SPS)

Dots indicate the hospital-level average in hospital-acquired condition rates over time. The orange slope lines were estimated using an interrupted time series model. The orange dashed lines represent 95% CIs of the slope lines. The vertical blue line indicates time of the SPS initiation.

Table 1. Definitions
Table 2. Study Hospital Characteristics
Table 3. Interrupted Time Series Analysis of Hospital-Acquired Condition (HAC) Rates Before and After Membership in the Children’s Hospitals’ Solutions for Patient Safety (SPS)
Original Investigation
July 25, 2022

Association Between Hospital-Acquired Harm Outcomes and Membership in a National Patient Safety Collaborative

Author Affiliations
  • 1Department of Paediatrics, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
  • 2The Hospital for Sick Children, Toronto, Ontario, Canada
  • 3Children’s Hospitals’ Solutions for Patient Safety, Toronto, Ontario, Canada
  • 4Department of Family Medicine, Division of Biostatistics, Oregon Health & Science University, Portland
  • 5Department of Pediatrics and Department of Bioethics, Case Western Reserve University School of Medicine, Cleveland, Ohio
  • 6UH Rainbow Babies and Children’s Hospital, Cleveland, Ohio
  • 7Community Research at United Way of Central New Mexico, Albuquerque
  • 8James M. Anderson Center for Health Systems Excellence, Cincinnati Children’s Hospital, Cincinnati, Ohio
  • 9Office of Quality and Patient Safety, Department of Pharmacy and Pharmaceutical Sciences, St Jude Children’s Research Hospital, Memphis, Tennessee
  • 10Department of Pediatrics, Nationwide Children’s Hospital, Columbus, Ohio
  • 11Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
  • 12Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia
  • 13Leonard Davis Institute, University of Pennsylvania, Philadelphia
  • 14Riley Hospital for Children, Indiana University Health, Indianapolis
  • 15Indiana University School of Medicine, Indianapolis
  • 16The Center for Quality and Patient Safety, Seattle Children’s Hospital, Seattle, Washington
  • 17Division of General Pediatrics and Hospital Medicine, Department of Pediatrics, University of Washington, Seattle
JAMA Pediatr. 2022;176(9):924-932. doi:10.1001/jamapediatrics.2022.2493
Key Points

Question What is the association between a federally funded hospital engagement network improvement model, as exemplified by the Children’s Hospitals’ Solutions for Patient Safety, and hospital-acquired harms?

Findings In this hospital cohort study that included 99 children’s hospitals across the US and Canada, significant improvement occurred for 3 of 8 harms after accounting for secular trends.

Meaning This study of the Children’s Hospitals’ Solutions for Patient Safety Hospital Engagement Network adds rigor to previous claims of an association between harm reduction and the hospital engagement network improvement model by adjusting for secular trends.

Abstract

Importance Hospital engagement networks supported by the US Centers for Medicare & Medicaid Services Partnership for Patients program have reported significant reductions in hospital-acquired harm, but methodological limitations and lack of peer review have led to persistent questions about the effectiveness of this approach.

Objective To evaluate associations between membership in Children’s Hospitals’ Solutions for Patient Safety (SPS), a federally funded hospital engagement network, and hospital-acquired harm using standardized definitions and secular trend adjustment.

Design, Setting, and Participants This prospective hospital cohort study included 99 children’s hospitals. Using interrupted time series analyses with staggered intervention introduction, immediate and postimplementation changes in hospital-acquired harm rates were analyzed, with adjustment for preexisting secular trends. Outcomes were further evaluated by early-adopting (n = 73) and late-adopting (n = 26) cohorts.

Exposures Hospitals implemented harm prevention bundles, reported outcomes and bundle compliance using standard definitions to the network monthly, participated in learning events, and implemented a broad safety culture program. Hospitals received regular reports on their comparative performance.

Main Outcomes and Measures Outcomes for 8 hospital-acquired conditions were evaluated over 1 year before and 3 years after intervention.

Results In total, 99 hospitals met the inclusion criteria and were included in the analysis. A total of 73 were considered part of the early-adopting cohort (joined in 2012-2013) and 26 were considered part of the late-adopting cohort (joined in 2014-2016). A total of 42 hospitals were freestanding children’s hospitals, and 57 were children’s hospitals within hospital or health systems. The implementation of SPS was associated with an improvement in hospital-acquired condition rates in 3 of the 8 conditions after accounting for secular trends. Membership in the SPS was associated with an immediate reduction in central catheter–associated bloodstream infections (coefficient = −0.152; 95% CI, −0.213 to −0.019) and falls of moderate or greater severity (coefficient = −0.331; 95% CI, −0.594 to −0.069). The implementation of the SPS was associated with a reduction in the monthly rate of adverse drug events (coefficient = −0.021; 95% CI, −0.034 to −0.008) in the post-SPS period. The study team observed larger decreases for the early-adopting cohort compared with the late-adopting cohort.

Conclusions and Relevance Through the application of rigorous methods (standard definitions and longitudinal time series analysis with adjustment for secular trends), this study provides a more thorough analysis of the association between the Partnership for Patients hospital engagement network model and reductions in hospital-acquired conditions. These findings strengthen previous claims of an association between this model and improvement. However, inconsistent observations across hospital-acquired conditions when adjusted for secular trends suggest that some caution regarding attributing all observed effects to this model is warranted.

Introduction

Substantial public investment has supported collaboratives aimed at reducing hospital-acquired harm, most notably through the Partnership for Patients (P4P) program established by the US Centers for Medicare & Medicaid Services (CMS) as part of the 2010 Affordable Care Act legislation. The P4P program used a collaborative approach to quality improvement designed to reduce hospital-acquired conditions (HACs), through technical and implementation support provided by several hospital engagement networks (HENs). This approach, which engaged 4000 hospitals, reportedly achieved substantial improvements, including 2 million fewer harms, tens of thousands of lives saved, and billions of dollars in avoided health care costs.1 Media commentators and academic experts have mostly endorsed these claims.2 However, debate has ensued regarding the true effect and return on investment.3,4 Specifically, some critics assert that results are overstated, citing methodologic flaws, such as lack of standard definitions and inferior analytic methods.4 To some extent, these concerns have been refuted by program leaders3 and addressed through subsequent evaluation,1 but controversy remains regarding the relative contributions to this improvement from direct effects of the intervention vs secular trends, and independent program evaluations have not undergone peer review.

It is critically important to understand the true effect of large-scale, government-funded collaborative improvement programs to guide policy and practice around health care improvement. Costs include not only direct federal funding but also additional costs, such as membership dues, person-hours and travel for participation in learning activities, and health care human resources to implement and sustain quality improvement activities. While double-blinded, randomized, placebo-controlled trials may not be practical in the realm of large-scale patient safety work, standardized measurement, quasi-experimental design, and more advanced analytic methods are feasible and can help answer these important questions.

The program examined in the present study, the Children’s Hospitals’ Solutions for Patient Safety (SPS), is one of several HENs and later hospital improvement and innovation networks (HIINs) funded by the P4P program. SPS member hospitals implement harm prevention bundles and submit regular process and outcome data across a broad portfolio of HACs. The experience of the first 33 SPS member hospitals was previously evaluated in a prospective cohort study with historical controls using interrupted time series methodology.5 Reductions in HACs ranging between 9% and 71% were reported, consistent with reported results of the larger group of P4P-funded HENs.5 The present study expands on this analysis by reporting on a larger and more diverse group of hospitals over a longer period of time and aims to address limitations of prior analyses by using standardized definitions and measurement, quasi-experimental design, and more advanced analytic methods (adjustment for secular trends) to examine the association between the HEN improvement model (using SPS as an exemplar) and patient safety outcomes. Our hypothesis, based on the findings of a previous SPS study,5 was that after joining the national collaborative, we would observe both an immediate change and a slope change in hospital-acquired harm outcomes. Furthermore, given that large-scale improvement efforts are typically scaled up over a period of time, we explored heterogeneity in this association between hospitals that entered the network earlier vs later to assess potential differences among cohorts.

Methods
Setting

The network had its roots in a statewide children’s hospital collaborative formed in 2008.6 In 2011, this group of 8 children’s hospitals partnered with 25 others across the US and was awarded funds from the P4P program to form the Children’s Hospitals’ SPS. The network enrolled additional hospitals across the US and later Canada and, as of 2022, includes 144 children’s hospitals6 (freestanding children’s hospitals and children’s hospitals within hospital or health systems). In 2018, SPS transitioned away from the HIIN contract to become exclusively member funded. Hospitals included in the present study enrolled in the network in annual cohorts from 2012 to 2016 and represent a wide range of hospital sizes and geographical locations. The institutional review board at Cincinnati Children’s Hospital Medical Center reviewed and approved a waiver of consent for this study.

Inclusion and Exclusion Criteria and Hospital Characteristics

All 139 hospital network members as of 2019 were considered for inclusion. From this set, we excluded hospitals that did not provide HAC outcome data at any point during the study period (n = 36) and hospitals that did not contribute any data prior to joining the SPS collaborative (n = 4). This resulted in a study sample of 99 hospitals. Hospital characteristics, including size (by beds and patient-days), type (freestanding vs hospital within hospital), and presence of markers of high acuity care were assessed, as was whether they belonged to an early-adopting (2012-2013) or late-adopting (2014-2016) cohort of hospitals joining the HEN/HIIN.

Quality Improvement Model

The quality improvement approach used by SPS is based on the Model for Improvement7 and the Institute for Healthcare Improvement’s Breakthrough Collaborative model.8 Centralized network leadership oversaw the identification and dissemination of HAC prevention best practices. The methodology for generating and disseminating HAC prevention strategies as well as the network’s approach for engaging members in shared improvement work have been described previously.9,10 In brief, volunteer leaders and subject matter experts for each HAC were recruited to partner with centralized clinical improvement experts, data analysts, and project management staff to establish best practice prevention bundles as well as standardized outcome and process measure definitions. Member hospitals committed to reporting outcome and process measures (compliance to prevention bundles) monthly for a portfolio of HACs. Education around clinical practice, measurement, and data submission was provided through a 6-month orientation course offered annually for new members and ongoing regular topical webinars and in-person learning sessions for all members. In addition to the HAC bundles, hospitals implemented a safety culture program through a formal partnership with an organization that assists health care organizations in implementing a comprehensive set of practices patterned after high-reliability organizations.11 This safety culture program included safety event cause analysis standardization; hospital board safety education; leadership strategies for safety, such as rounding and organization-wide safety huddles; teamwork and communication training for all staff; and a peer safety coach program. Implementation and continuous learning were supported through an all-teach, all-learn approach involving regular in-person and synchronous virtual events and an extensive archive of materials. Implementation and sustainment of the elements of the culture program were measured by self-report annually. Hospital-level performance data were available to individual hospitals on a continuous basis, with periodic targeted communications to hospital leaders, including customized hospital performance summaries. Sharing of individual hospital outcome data was limited to higher-performing hospitals, with their permission.

Measures

The outcome measures for this study include self-reported rates of 8 HACs identified by CMS and defined by established national bodies where possible and, when unavailable, defined by expert consensus (Table 1).12-17 Rates were defined as HAC events per 1000 relevant exposures (or 100, in the case of surgical site infection) and were measured monthly for the study period between January 1, 2011, and December 31, 2016.
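As a worked illustration of this rate construction, a monthly rate might be computed as in the sketch below; the variable names are hypothetical and do not reflect the network’s actual data schema.

```r
# Hypothetical monthly rate calculation: HAC events per 1000 relevant exposures.
# The variable names (events, exposure_days) are illustrative assumptions only.
events        <- 3      # HAC events reported by a hospital in one month
exposure_days <- 2150   # relevant exposure for that month (eg, central line-days)
rate_per_1000 <- events / exposure_days * 1000
rate_per_1000           # approximately 1.40 events per 1000 exposure-days
```

For surgical site infections, the same calculation would use a denominator of 100 procedures rather than 1000 exposure-days.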

Data Collection and Reporting

Hospitals collected and submitted HAC outcome and process measures data to the network monthly via a secure web-based form. Instructions to hospitals regarding event detection and surveillance were, “Each hospital will report data using their own collection methods until specific high detection methods are prescribed by the network. Methods may include, but are not limited to, safety event report or medical record review, automated notifications, or other.”18 For venous thromboembolism, a 2-step diagnostic validation procedure was specified. We conducted surveillance for potential data submission errors by error-checking code on the web-based form and by manual analysis by the SPS data team.

Statistical Analysis

We used descriptive statistics including counts and frequencies as well as means and SDs to characterize study hospitals. Monthly HAC rate data from January 1, 2011, to December 31, 2016, were analyzed. To visually describe the changes in HAC outcome rates before and after SPS initiation, we plotted longitudinal profile plots with locally weighted scatterplot smoother (LOWESS) curves over the study period. We used 2 analytic approaches to study the association of SPS with HAC outcome rates: (1) pre-post analysis without secular trend adjustment, to follow what has been done in previous work, and (2) pre-post analysis accounting for secular trends.5 The preintervention period of 1 year was chosen because hospitals submitted 1 year of retrospective outcome data on joining the network. First, we descriptively report point estimates for pre-SPS and post-SPS HAC outcome rates and the relative rate change from the pre-SPS to post-SPS period along with their corresponding 95% CIs.
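A minimal sketch of how such profile plots could be produced in R is shown below; the data frame and column names are assumptions, and ggplot2’s loess smoother is used here as a stand-in for LOWESS.

```r
# Sketch of a hospital-level profile plot with a locally weighted smoother.
# Assumes a data frame `hac` with columns: hospital, month (1-72), and rate.
library(ggplot2)

ggplot(hac, aes(x = month, y = rate)) +
  geom_point(alpha = 0.2) +                           # hospital-month rates
  geom_smooth(method = "loess", se = TRUE) +          # locally weighted smoother
  geom_vline(xintercept = 13, linetype = "dashed") +  # illustrative SPS start month
  labs(x = "Month of study period", y = "HAC events per 1000 exposures")
```

Because hospitals joined the network in staggered cohorts, a single vertical reference line is illustrative only; in the analysis each hospital’s time axis is anchored to its own SPS start month.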

Second, to examine changes in HAC rates from pre-SPS to post-SPS initiation accounting for secular trends, we used hospital-level segmented regression analysis. This quasi-experimental interrupted time series analysis with staggered intervention introduction allows us to control for secular trends in HAC rates and to assess pre-SPS trends (ie, secular trends), the difference in rates immediately before and after joining SPS (immediate association), and the post-SPS change over time (change from secular trends). Specifically, we used generalized linear mixed-effects models, assuming a Poisson distribution for the monthly HAC rates with a log link, and we included an offset for the log of hospital population size in that month.19 Each model included as independent variables time (in months) since the start of the study, an indicator variable for time occurring before or after SPS initiation (coded 0 before joining the SPS and coded 1 after joining the SPS), and a continuous variable counting the number of months after SPS initiation (coded 0 before SPS initiation and time − 12 after initiation), as described in Wagner et al.20 The regression coefficient corresponding with the time variable estimates the pre-SPS baseline secular trend; the coefficient corresponding with the indicator variable for pre-SPS vs post-SPS estimates the level change in HAC rates (on the log scale) immediately following SPS initiation; and the coefficient corresponding with the number of months after SPS initiation variable estimates the change in HAC outcome rate trends after SPS initiation (on the log scale) compared with the baseline secular trend prior to joining SPS. To account for temporal correlation of observations within hospitals,21 we included hospital-level random intercepts and hospital-level random slopes for time. Our hypothesis was that the SPS would be associated with an immediate and long-term change in HAC rates. As such, to test that hypothesis, we performed a 2-df likelihood ratio test evaluating whether the regression coefficients associated with immediate or long-term change in HAC rates were equal to 0.
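A hedged sketch of the segmented-regression mixed model described above, using lme4 with assumed variable names (not the authors’ actual code), is shown below.

```r
# Sketch of the interrupted time series mixed-effects model (assumed data frame `df`).
# Assumed columns:
#   events      - monthly HAC event count
#   exposure    - relevant monthly exposure (eg, line-days), used as the offset
#   time        - months since the start of the study (1, 2, ...)
#   post        - 0 before SPS initiation, 1 after
#   months_post - 0 before SPS initiation, (time - 12) after
#   hospital    - hospital identifier
library(lme4)

fit_full <- glmer(
  events ~ time + post + months_post + offset(log(exposure)) +
    (1 + time | hospital),            # hospital-level random intercept and slope for time
  family = poisson(link = "log"),
  data   = df
)

# 2-df likelihood ratio test: are the immediate (post) and gradual (months_post)
# changes jointly different from 0?
fit_reduced <- update(fit_full, . ~ . - post - months_post)
anova(fit_reduced, fit_full)          # chi-square test with 2 df
```

On this log scale, exponentiating the level-change coefficient gives an immediate rate ratio; for example, exp(−0.152) ≈ 0.86, ie, roughly a 14% immediate reduction, corresponds to the central catheter–associated bloodstream infection coefficient reported in the Results.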

As a secondary analysis, we also sought to evaluate whether the associations differed by time of entry into the SPS program. To evaluate this, we repeated our analyses comparing early-adopting hospitals (those that joined in 2012-2013) and late-adopting hospitals (those that joined in 2014-2016) by including interaction terms of an early/late indicator with all independent variables in a single interrupted time series model, as described above. All model effect estimates were presented with corresponding 95% CIs. To address the potential for overdispersion to lead to incorrect estimation of the SEs,22 we performed a sensitivity analysis in which we fit generalized linear mixed-effects models assuming a negative binomial distribution for the monthly HAC rates instead of a Poisson distribution.
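Under the same assumed data structure, the cohort-interaction model and the negative binomial sensitivity analysis might look like the following sketch (illustrative only, not the authors’ code; `early` is an assumed indicator for the 2012-2013 cohort).

```r
library(lme4)

# (1) Early- vs late-adopting cohorts: interact the cohort indicator with all
#     segmented-regression terms in a single model.
fit_cohort <- glmer(
  events ~ (time + post + months_post) * early + offset(log(exposure)) +
    (1 + time | hospital),
  family = poisson(link = "log"),
  data   = df
)

# (2) Sensitivity analysis for overdispersion: negative binomial instead of Poisson.
fit_nb <- glmer.nb(
  events ~ time + post + months_post + offset(log(exposure)) +
    (1 + time | hospital),
  data = df
)
```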

All statistical tests were 2-sided, and significance was defined as P < .05. This analysis does not establish causality; thus, no multiplicity adjustment was performed. Data were aggregated centrally in a Microsoft database and exported for analysis in RStudio version 1.4.1103 (R Institute).

Results

In total, 99 hospitals met the inclusion criteria and were included in the analysis. A total of 73 were considered part of the early-adopting cohort (joined in 2012-2013) and 26 were considered part of the late-adopting cohort (joined in 2014-2016). A total of 42 hospitals were freestanding children’s hospitals, and 57 were children’s hospitals within hospital or health systems. The mean (SD) number of beds per site was 270 (162). Hospital characteristics, including markers of higher acuity care (cardiac, neonatal, and pediatric intensive care units as well as hematology/oncology service lines), are described in Table 2. Unadjusted rates in both the pre-SPS and post-SPS periods are reported in eFigure 1, eFigure 2, and eTable 1 in the Supplement and demonstrate that all HACs, except for venous thromboembolism events, saw a reduction in rates over the SPS implementation period. We note that the early-adopting cohort had higher pre-SPS rates than the late-adopting cohort for all HACs, and the early-adopting cohort had a greater reduction in HAC rates from pre-SPS implementation to post-SPS implementation compared with the late-adopting cohort.

Results from comparative interrupted time series analyses that account for secular trends are reported in Table 3, Figure 1, Figure 2, and eTable 2 in the Supplement. Implementation of the SPS was associated with an improvement in HAC rates in 3 of the 8 conditions after accounting for secular trends. Membership in SPS was associated with an immediate reduction in central catheter–associated bloodstream infections (−0.152; 95% CI, −0.213 to −0.019) and falls of moderate or greater severity (−0.331; 95% CI, −0.594 to −0.069). Implementation of the SPS was associated with a reduction in the monthly rate of adverse drug events (−0.021; 95% CI, −0.034 to −0.008) in the post-SPS period. Catheter-associated urinary tract infection, pressure injury, surgical site infection, ventilator-associated pneumonia, and venous thromboembolism events did not change significantly with the intervention overall. When assessing associations by early-adopting and late-adopting group, the early-adopting cohort showed greater decreases in HAC rates (both immediate and gradual) than the late-adopting cohort across most HACs. Lastly, sensitivity analyses using generalized linear mixed-effects models assuming a negative binomial distribution for the monthly HAC rates showed qualitatively similar results to those presented using a Poisson distribution (eTable 3 in the Supplement).

Discussion

Several results of this analysis merit further examination. Of greatest importance is the extent to which improvement occurred above secular trends and what implications this has for the discussion on the effectiveness of the federally funded approach, such as the HEN model.2-4 Previous discussion about this has been somewhat polarized between 2 extremes. On one extreme are the impressive reductions observed using pre-post analysis and the corresponding cost reductions; on the other extreme is the assertion that none of this improvement was caused by the HEN model itself. To our knowledge, this analysis of the SPS network is the first study that demonstrates statistically significant declines in multiple HACs after adjusting for secular trends within an HEN. This study addresses some of the methodological concerns expressed about earlier evaluations of the HEN model,4 including use of standard definitions for outcomes, detection and surveillance methods, and interrupted time series with adjustment for secular trends. In the present study of the SPS HEN, 3 harms improved above secular trends and 5 did not. This strengthens the claim that enrollment in the federally funded P4P program HEN resulted in improvements above what would have occurred without this intervention. However, it also raises caution against overstating the effect across all harms.

Second, we found that the effect of the SPS intervention varied by HAC. There are likely numerous factors influencing this finding. For some HACs, established evidence for prevention practices and the existence of prior collaborative efforts may explain why stronger secular trends existed. While this could accelerate improvement by making it easier to build will and organize resources, it can also mean that further improvement is harder to realize. The complexity of prevention activities varied greatly between HACs in terms of the number of professional groups affected, the number and type of care practices targeted, and the intensity of education and reinforcement required to achieve reliable process performance. This variable implementation complexity between HACs likely resulted in mixed improvement results. Furthermore, the different HAC baseline incidences mean that the power to show statistically significant differences varied by HAC.

Additionally, there was considerable variation by phase of enrollment into the SPS. There are a few potential explanations for this. First, network hospitals varied greatly by size and type, with larger hospitals, more freestanding hospitals, and more hospitals with markers of higher acuity represented among the early-adopting cohort. Consequently, it is likely that the early-adopting cohort had patients at higher risk of experiencing an HAC and was therefore potentially more susceptible to improvement from evidence-based interventions. Second, the late-adopting cohort, with more children’s hospitals within adult hospital or health systems, may have already been exposed to HAC reduction interventions that were part of the growing trend in the years leading up to their joining the SPS. A third possible explanation is that the early-adopting cohort was more committed and/or better resourced, and its local quality improvement efforts were more robust. This finding is relevant as large-scale improvement interventions often originate with inclusion of larger academic centers, and widespread opportunity remains to spread these interventions to community-based and rural care settings.

Limitations

There are several limitations to this study. First, HACs are self-reported and subject to surveillance bias, variation in event classification behaviors, data entry errors, and other well-known limitations of voluntary reporting.23 This could bias the results in an unknown direction. Second, while there were some basic data accuracy checks built into the submission process designed to detect data submission errors, these unvalidated data submissions could bias the results in an unknown direction. Third, hospitals were likely subjected to varying levels of best practice recommendations prior to participating in the network, thus potentially limiting the association of the intervention with postintervention outcomes. This would likely bias the results toward the null. Fourth, because the late-adopting group included a smaller number of hospitals and had a larger proportion of missing data, conclusions for some harms could not be drawn for this group owing to limitations in statistical power, particularly for lower-frequency HACs. Fifth, while failure to account for secular trending is a widely accepted limitation of prior analyses, the assumption that secular trending would have continued at the preexisting pace or slope is also a potential limitation. We cannot predict what would have happened without the intervention, and assuming the trend would have continued (ie, overadjusting) is arguably as much of a potential error as failing to adjust for the preexisting trend. Sixth, SPS is one example of an HEN, and it had some unique features; thus, the associations observed may not be fully applicable to all HENs. Despite these limitations, these results are encouraging and suggest SPS contributed to some reductions in hospital-acquired harms, supporting the assertion that substantial harm reduction, and the related lives saved, have been achieved at population scale through the HEN approach. While data regarding attributable costs per harm are available, the present study does not address how the HEN model may have affected health care costs.

Conclusions

This study of 99 members of a pediatric SPS HEN supports the assertion that the P4P program was highly effective in accelerating reduction in HACs. However, this analysis suggests that earlier claims may have overstated some of the effects owing to failure to adjust for secular trends. Differential outcomes in early-adopting vs late-adopting cohorts suggest that scale and spread plans should be made with these factors in mind. Additionally, future studies focused particularly on an economic assessment of the benefits of the SPS and the HEN model more broadly are warranted.

Back to top
Article Information

Accepted for Publication: May 31, 2022.

Published Online: July 25, 2022. doi:10.1001/jamapediatrics.2022.2493

Corresponding Author: Maitreya Coffey, MD, The Hospital for Sick Children, 555 University Ave, Toronto, ON M5G 1X8, Canada (trey.coffey@sickkids.ca).

Author Contributions: Drs Coffey and Marino had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Coffey, Marino, Lyren, Purcell, Hoffman, Brilli, Muething, Hyman, Saysana, Sharek.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Coffey, Marino, Sharek.

Critical revision of the manuscript for important intellectual content: Coffey, Marino, Lyren, Hoffman, Brilli, Muething, Hyman, Saysana, Sharek.

Statistical analysis: Marino, Purcell, Brilli.

Obtained funding: Muething.

Administrative, technical, or material support: Coffey, Lyren, Muething.

Study supervision: Coffey, Lyren, Brilli, Muething.

Conflict of Interest Disclosures: Dr Coffey reported grants from the US Centers for Medicare & Medicaid Services and fees from Children’s Hospitals Solutions for Patient Safety. Dr Purcell reported fees from Children’s Hospitals Solutions for Patient Safety and previously received salary support from Cincinnati Children’s Hospital during the conduct of the study. Dr Muething reported grants from Ohio Solutions for Patient Safety and from the US Center for Medicare & Medicaid Services during the conduct of the study. Dr Saysana reported fees from Solutions for Patient Safety during the conduct of the study and fees from Indianapolis Coalition for Patient Safety Indiana University Health Physicians outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by the Children’s Hospitals’ Solutions for Patient Safety network, which is funded by network hospital participation fees and previously cofunded by grant HHSM-500-2016-00073C from the US Centers for Medicare & Medicaid Services.

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: We thank Jillian Powell, BS, and Nicole Rotundo, BS (James M. Anderson Center for Health Systems Excellence, Cincinnati Children’s Hospital, Cincinnati, Ohio) for invaluable support throughout every stage of this study and manuscript preparation. They were not compensated for their work.

References
1.
US Centers for Medicare & Medicaid Services. Partnership for Patients and the hospital improvement innovation networks: continuing forward momentum on reducing patient harm. Accessed June 16, 2022.
2.
Perla RJ, Pham H, Gilfillan R, et al. Government as innovation catalyst: lessons from the early center for Medicare and Medicaid innovation models. Health Aff (Millwood). 2018;37(2):213-221. doi:
3.
Conway P, Wagner D, McGann P, Joshi M. Did hospital engagement networks actually improve care? N Engl J Med. 2014;371(21):2040-2041. doi:
4.
Pronovost P, Jha AK. Did hospital engagement networks actually improve care? N Engl J Med. 2014;371(8):691-693. doi:
5.
Lyren A, Brilli RJ, Zieker K, Marino M, Muething S, Sharek PJ. Children’s hospitals’ solutions for patient safety collaborative impact on hospital-acquired harm. Pediatrics. 2017;140(3):e20163494. doi:
6.
Children’s Hospitals’ Solutions for Patient Safety. About us. Accessed June 27, 2022.
7.
Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. Jossey-Bass Publishers; 2009.
8.
Institute for Healthcare Improvement. The Breakthrough Series: IHI’s collaborative model for achieving breakthrough improvement. Accessed June 16, 2022.
9.
Lyren A, Dawson A, Purcell D, Hoffman JM, Provost L. Developing evidence for new patient safety bundles through multihospital collaboration. J Patient Saf. 2021;17(8):e1576-e1584. doi:
10.
Lyren A, Coffey M, Shepherd M, Lashutka N, Muething S; SPS Leadership Group. We will not compete on safety: how children’s hospitals have come together to hasten harm reduction. Jt Comm J Qual Patient Saf. 2018;44(7):377-388. doi:
11.
Press Ganey Associates. Safety and high reliability. Accessed June 16, 2022.
12.
National Coordinating Council for Medication Error Reporting and Prevention. NCC MERP index for categorizing medication errors. Accessed June 16, 2022.
13.
National Healthcare Safety Network. Urinary tract infection (catheter-associated urinary tract infection [CAUTI] and non-catheter-associated urinary tract infection [UTI]) event. Accessed June 16, 2022.
14.
National Healthcare Safety Network. Bloodstream infection event (central line–associated bloodstream infection and non–central line associated bloodstream infection). Accessed June 16, 2022.
15.
National Healthcare Safety Network. Surgical site event. Accessed June 16, 2022.
16.
Agency for Healthcare Research and Quality. Module 5: how to measure pressure injury rates and prevention practices. Accessed June 22, 2022.
17.
National Healthcare Safety Network. Pneumonia and non-ventilator-associated pneumonia event. Accessed June 16, 2022.
18.
Children’s Hospitals’ Solutions for Patient Safety. Operational definitions. Accessed June 21, 2022.
19.
French B, Heagerty PJ. Analysis of longitudinal data to evaluate a policy change. Stat Med. 2008;27(24):5005-5025. doi:
20.
Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299-309. doi:
21.
French B, Stuart EA. Study designs and statistical methods for studies of child and adolescent health policies. JAMA Pediatr. 2020;174(10):925-927. doi:
22.
Bernal JL, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol. 2017;46(1):348-355.
23.
Macrae C. The problem with incident reporting. BMJ Qual Saf. 2016;25(2):71-75. doi: