Background: Health-related patient-reported outcomes (HR-PROs) are crucial for assessing quality of life among individuals with low back pain. However, paper forms, while convenient for patients, impose a considerable manual tallying burden on data collectors. In this study, we developed a deep learning (DL) model capable of automatically reading these paper forms.
Methods: We employed the Japanese Orthopaedic Association Back Pain Evaluation Questionnaire, a globally recognized assessment tool for low back pain. The questionnaire comprises 25 low back pain-related multiple-choice questions and three pain-related visual analog scales (VASs). We collected 1305 forms from an academic medical center as the training set and 483 forms from a community medical center as the test set. The performance of our DL model on the multiple-choice questions was evaluated as a categorical classification task using accuracy, and its performance on the VASs was evaluated as a regression task using the correlation coefficient and absolute error.
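To make the evaluation scheme concrete, the following minimal sketch computes the metrics named above (accuracy for the categorical questions; correlation coefficient and mean absolute error for the VASs). The arrays and values are illustrative placeholders, not data from the study.

```python
import numpy as np
from sklearn.metrics import accuracy_score, mean_absolute_error
from scipy.stats import pearsonr

# Categorical questions: classification accuracy (illustrative arrays).
true_choices = np.array([2, 4, 1, 3])      # ground-truth answer indices
pred_choices = np.array([2, 4, 1, 3])      # model-predicted answer indices
acc = accuracy_score(true_choices, pred_choices)

# VAS scales: correlation coefficient and absolute error as regression metrics.
true_vas = np.array([3.5, 7.0, 1.2, 8.8])  # transcribed VAS values
pred_vas = np.array([3.4, 7.1, 1.0, 9.0])  # model-predicted VAS values
r, _ = pearsonr(true_vas, pred_vas)
mae = mean_absolute_error(true_vas, pred_vas)

print(f"accuracy={acc:.3f}, r={r:.3f}, MAE={mae:.2f}")
```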
Results: In external validation, the mean accuracy on the categorical questions was 0.997. When outputs for categorical questions with low probability (threshold: 0.9996) were excluded, accuracy reached 1.000 for the remaining 65% of questions. For the VASs, the mean correlation coefficient was 0.989 and the mean absolute error was 0.25.
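The confidence-threshold analysis reported above can be sketched as follows: predictions are kept only if the model's top-class probability exceeds the threshold, and accuracy is then computed on that retained subset. The threshold value comes from the abstract, but the function name and example arrays are hypothetical.

```python
import numpy as np

def accuracy_above_threshold(probs, preds, labels, threshold=0.9996):
    """Keep predictions whose top-class probability meets the threshold;
    return accuracy on that subset and the fraction of questions retained."""
    keep = probs >= threshold
    coverage = keep.mean()                     # proportion of questions retained
    accuracy = (preds[keep] == labels[keep]).mean() if keep.any() else float("nan")
    return accuracy, coverage

# Illustrative per-question top-class probabilities, predictions, and labels.
probs  = np.array([0.9999, 0.98, 0.99995, 0.9999])
preds  = np.array([1, 3, 2, 0])
labels = np.array([1, 2, 2, 0])
print(accuracy_above_threshold(probs, preds, labels))  # -> (1.0, 0.75)
```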
Conclusion: Our DL model demonstrated remarkable accuracy and correlation coefficients when automatically reading paper-based HR-PROs during external validation.
Keywords: Artificial intelligence; Back pain; Convolutional neural network; Deep learning; HR-PRO; JOABPEQ; Questionnaire.