Background/aims: We aimed to compare the Mini-Mental State Examination (MMSE) with the Mini-Cog by measuring agreement in the classification of participants, using a general population sample.
Methods: A cross-sectional evaluation of 609 community dwellers aged ≥60 years was performed by trained interviewers. Cohen's kappa and 95% confidence intervals (CI) were calculated to assess overall agreement, and Cronbach's alphas were computed to assess reliability. Two-parameter Item Response Theory (IRT) models (difficulty and discrimination parameters) were used to assess discrimination.
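For reference, the quantities named above take their standard forms (general formulas only; the abstract does not specify estimation details): Cohen's kappa, \( \kappa = (p_o - p_e)/(1 - p_e) \), where \( p_o \) is the observed proportion of agreement between the two dichotomous classifications (impaired vs. not impaired on each instrument) and \( p_e \) is the agreement expected by chance; and the two-parameter logistic IRT model, \( P_i(\theta) = 1/\{1 + \exp[-a_i(\theta - b_i)]\} \), the probability of endorsing item \( i \) for a respondent with latent ability \( \theta \), where \( a_i \) is the item discrimination and \( b_i \) the item difficulty.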
Results: Using the MMSE cut-point of scores <24, 3.1% of the participants would be classified as 'cognitively impaired', and 6.2% using the cut-point of scores <25. Using the Mini-Cog cut-point of scores <3, 11.3% would be classified as impaired. For MMSE <24 versus Mini-Cog <3, Cohen's kappa was 0.116 (95% CI: -0.073 to 0.305), and 0.258 (95% CI: 0.101-0.415) for MMSE <25. The highest kappa was obtained for MMSE <26 versus Mini-Cog <3 (kappa = 0.413). Cronbach's alpha was 0.6108 for the MMSE and 0.2776 for the Mini-Cog. Co-calibration according to inherent ability is presented graphically.
Conclusions: Agreement between the two scales appears weak in our sample. The discrimination and reliability analyses suggest better performance for subsets of the MMSE than for the Mini-Cog. The usefulness of calibrated scores is discussed.