Background: Comparing outcomes across hospitals in order to learn from the best-performing hospitals can be valuable. However, reliably identifying best performance is challenging. This study assesses whether best-performing hospitals can be distinguished on single outcomes and whether performance is consistent across different outcomes.
Methods: Data were derived from the Dutch ColoRectal Audit 2013-2015. Outcomes considered were textbook outcome (colon), (circumferential) resection margins, (serious) complications, mortality, and 'failure to rescue'. To account for uncertainty in rankings, random-effects logistic regression models were used to calculate expected ranks (ERs) for each hospital and outcome. Rankability was calculated for each outcome as a measure of the reliability of ranking. Furthermore, the correlation between ERs on different outcomes was assessed. Correlations were considered weak (<0.40), moderate (0.40-0.59), and strong (>0.60).
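To illustrate the two key quantities named above, the following is a minimal sketch (not the authors' code) of how expected ranks and rankability can be computed once a random-effects logistic model has been fitted. The hospital-level estimates, standard errors, and the Monte Carlo approach used here are illustrative assumptions, not study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimates for 84 hospitals (placeholders, not study data):
# theta = estimated random effects (log-odds deviations) from a fitted
# random-effects logistic model, se = their standard errors.
n_hosp = 84
theta = rng.normal(0.0, 0.3, n_hosp)
se = rng.uniform(0.15, 0.45, n_hosp)
tau2 = np.var(theta)  # between-hospital variance estimate

# Rankability: share of between-hospital variation not attributable to
# chance, tau^2 / (tau^2 + median sampling variance).
rankability = tau2 / (tau2 + np.median(se ** 2))

# Expected rank of each hospital: average rank over Monte Carlo draws
# from the approximate sampling distribution of its effect.
draws = rng.normal(theta, se, size=(10_000, n_hosp))
ranks = draws.argsort(axis=1).argsort(axis=1) + 1  # rank 1 = lowest effect
expected_rank = ranks.mean(axis=0)

print(f"Rankability: {rankability:.0%}")
print(f"Expected rank of first hospital: {expected_rank[0]:.1f}")
```

In this sketch, a rankability near 100% would indicate that observed differences between hospitals largely reflect true performance differences rather than chance, whereas ERs close to the mid-rank signal that a hospital cannot be reliably placed at either end of the ranking.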
Results: The study included 32 143 patients, of whom 11 373 were treated in 2015 across 84 hospitals (8181 colon and 3192 rectal cancer patients). In this one-year period, 'postoperative complications' had the highest rankability for colon (57%) and rectal (41%) surgery. No hospital, or group of hospitals, had the highest ERs on all outcomes. Correlations between ERs on different outcomes were moderate in 2 and strong in 4 of 25 combinations. Rankability of colorectal mortality increased from 14% for 2015 alone to 35% when data over 2013-2015 were used.
Conclusion: The highest reliability of identifying best performance based on a single outcome was 57%. However, the balance between the reliability and the clinical relevance of outcomes is fragile. No hospital, or group of hospitals, could be identified as the best performer on all outcomes, and performance was not consistent across outcomes.
Keywords: Best performance; Clinical auditing; Colorectal cancer surgery; Outcome research.