Background: We examined the validity and reliability of a previously developed criterion-referenced assessment checklist (AC) and global rating scale (GRS) for assessing performance in ultrasound-guided regional anaesthesia (UGRA).
Methods: Single real-time UGRA procedures performed by 21 anaesthetists (21 blocks in total) were assessed using a 22-item AC and a 9-item GRS, scored on 3-point and 5-point Likert scales, respectively. We used one-way analysis of variance to compare assessment scores between three experience groups (Group 1: ≤30 blocks in the preceding year; Group 2: 31-100 blocks; Group 3: >100 blocks). Concurrent validity was evaluated using Pearson's correlation coefficient (r). We calculated the Type A intra-class correlation coefficient using an absolute-agreement definition in a two-way random-effects model, and inter-rater reliability as absolute agreement between raters. Inter-item consistency was assessed with Cronbach's α.
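For illustration, the following is a minimal sketch of these analyses in Python; the file name, column names (procedure, rater, group, ac_score, grs_score, ac_item_1 … ac_item_22), and the use of the SciPy and pingouin libraries are our assumptions for demonstration, not the authors' actual analysis pipeline.

```python
# Hedged sketch of the reported statistics; data layout is hypothetical.
import pandas as pd
import pingouin as pg
from scipy import stats

# Assumed long-format data: one row per (procedure, rater) pair.
df = pd.read_csv("ugra_scores.csv")  # hypothetical file name

# One-way ANOVA comparing AC scores across the three experience groups.
groups = [df.loc[df["group"] == g, "ac_score"] for g in (1, 2, 3)]
f_stat, p_anova = stats.f_oneway(*groups)

# Concurrent validity: Pearson correlation between AC and GRS scores.
r, p_r = stats.pearsonr(df["ac_score"], df["grs_score"])

# Type A ICC (absolute agreement, two-way random effects) corresponds
# to the "ICC2" row of pingouin's output table.
icc = pg.intraclass_corr(data=df, targets="procedure",
                         raters="rater", ratings="ac_score")
icc2 = icc.set_index("Type").loc["ICC2"]

# Inter-item consistency: Cronbach's alpha over the 22 AC items
# (assumes one column per item).
item_cols = [f"ac_item_{i}" for i in range(1, 23)]
alpha, ci = pg.cronbach_alpha(data=df[item_cols])
```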
Results: Greater UGRA experience in the preceding year was associated with higher AC [F(2, 18)=12.01; P<0.001] and GRS [F(2, 18)=7.44; P=0.004] scores. There was a strong correlation between the mean AC and GRS scores [r=0.73; P<0.001], and strong inter-item consistency for both the AC (α=0.94) and the GRS (α=0.83). The intra-class correlation coefficient (95% confidence interval) and inter-rater reliability (95% confidence interval) were 0.96 (0.95-0.96) and 0.91 (0.88-0.95) for the AC, and 0.93 (0.90-0.94) and 0.80 (0.74-0.86) for the GRS, respectively.
Conclusions: Both assessments differentiated between individuals who had performed fewer (≤30) and many (>100) blocks in the preceding year, supporting construct validity. The study also established the concurrent validity and overall reliability of both tools. We recommend both tools for use in UGRA assessment.
Keywords: anaesthetists; checklist; educational assessment; quality; reproducibility of results; ultrasound.
Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.