Objectives: Breast density is generally assessed by visual inspection. The aim of our study was to compare visual assessment of breast density by experienced and inexperienced readers with semi-automated analysis of breast density.
Methods: Breast density was assessed by an experienced and an inexperienced reader in 200 mammograms and scored according to the quantitative BI-RADS classification. Breast density was also assessed by dedicated software using a semi-automated thresholding technique. Agreement between the breast density classifications of the two readers, as well as agreement of each reader's assessment with the semi-automated analysis as the reference standard, was expressed as the weighted kappa value.
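For illustration, the sketch below shows how a semi-automated thresholding density measurement and the weighted kappa agreement could be computed in Python. The thresholds, BI-RADS cut-offs, and all data are assumptions for the example; the abstract does not specify the software or parameters actually used.

```python
# Illustrative sketch only; the study's actual software and thresholds are not specified.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def percent_density(mammogram, breast_threshold, dense_threshold):
    """Percent mammographic density from two operator-chosen intensity thresholds
    (hypothetical parameters): one segmenting the breast from the background,
    one segmenting dense tissue within the breast."""
    breast = mammogram > breast_threshold   # breast area mask
    dense = mammogram > dense_threshold     # dense tissue mask (dense_threshold > breast_threshold)
    return 100.0 * dense.sum() / breast.sum()

def density_to_birads(pd_value):
    """Map percent density to quantitative BI-RADS categories 1-4
    (cut-offs <25%, 25-50%, 51-75%, >75% assumed for illustration)."""
    if pd_value < 25:
        return 1
    if pd_value <= 50:
        return 2
    if pd_value <= 75:
        return 3
    return 4

# Synthetic stand-in for a mammogram and hypothetical thresholds.
img = np.random.randint(0, 255, size=(100, 100))
category = density_to_birads(percent_density(img, breast_threshold=30, dense_threshold=150))

# Agreement of a reader's visual BI-RADS scores with the reference standard,
# expressed as the weighted kappa (linear weights assumed; scores are hypothetical).
reader_scores = [2, 3, 2, 4, 1]
reference_scores = [2, 2, 2, 3, 1]
kappa = cohen_kappa_score(reader_scores, reference_scores, weights="linear")
print(f"weighted kappa = {kappa:.3f}")
```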
Results: Using the semi-automated analysis, agreement between breast density measurements of both breasts in both projections was excellent (ICC >0.9, P < 0.0001), and reproducibility of the semi-automated analysis was excellent (ICC >0.8, P < 0.0001). The experienced reader assigned the correct BI-RADS breast density category in 58.5% of cases; density was overestimated in 35.5% and underestimated in 6.0% of cases. Results of the inexperienced reader were less accurate. Agreement between each reader's classification and the semi-automated analysis was only moderate, with weighted kappa values of 0.367 (experienced reader) and 0.232 (inexperienced reader).
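As a further illustration, the intra-class correlation used to express agreement and reproducibility of the semi-automated measurements might be computed along the following lines. The pingouin package, the long-format column names, and the density values are assumptions for the example, not part of the study.

```python
# Illustrative sketch only; the study's statistical software is not specified in the abstract.
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one density value per image and per repeated measurement.
df = pd.DataFrame({
    "image":       [1, 1, 2, 2, 3, 3, 4, 4],
    "measurement": ["first", "second"] * 4,
    "density":     [22.0, 24.5, 61.0, 58.5, 35.0, 33.0, 80.0, 78.5],
})

# Reproducibility expressed as the intra-class correlation coefficient (ICC), as in the abstract.
icc = pg.intraclass_corr(data=df, targets="image", raters="measurement", ratings="density")
print(icc[["Type", "ICC", "pval"]])
```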
Conclusion: Visual assessment of breast density on mammograms is inaccurate and observer-dependent.