Objective: To evaluate the performance of computerized drug-drug interaction (DDI) software in identifying clinically important DDIs.
Design: One-time performance test of computer systems using a standard set of prescriptions.
Setting: Community pharmacies or central corporate locations with pharmacy terminals identical to those used in actual pharmacies.
Participants: Chain and health maintenance organization (HMO) pharmacies with seven or more practice sites in Washington State. A total of nine different DDI software programs were installed in 516 community pharmacies represented by these chains and HMOs.
Main outcome measures: Sensitivity, specificity, and positive and negative predictive values of software in detecting 16 well-established DDIs contained within six fictitious patient profiles.
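These performance measures follow the standard epidemiologic definitions based on true-positive (TP), false-positive (FP), true-negative (TN), and false-negative (FN) detection counts; the formulas below are these conventional definitions, provided for reference, and are not reproduced from the study text:

\[ \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP} \]
\[ \text{Positive predictive value} = \frac{TP}{TP + FP}, \qquad \text{Negative predictive value} = \frac{TN}{TN + FN} \]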
Results: The software systems failed to detect clinically relevant DDIs one-third of the time. Sensitivity of the software programs ranged from 0.44 to 0.88, with 1.00 being perfect; specificity ranged from 0.71 to 1.00; positive predictive value ranged from 0.67 to 1.00; and negative predictive value ranged from 0.69 to 0.90. Performance also differed between installations of the same software package at different locations.
Conclusion: The performance of most DDI-detecting software programs tested in this study was suboptimal. Improvement is needed if these programs are to contribute meaningfully to the detection of DDIs.