In recent years there has been significant demand for computer-assisted diagnostic tools to assess prostate cancer from whole slide images (WSIs). In this study we develop and validate a machine learning system for prostate cancer assessment that, to meet clinical reporting needs, also detects perineural invasion and measures the cancer proportion. The system analyses the WSI in three consecutive stages: tissue detection, classification, and slide-level analysis. The WSI is first divided into smaller regions (patches). The tissue detection stage uses traditional machine learning to identify patches containing tissue; these patches are then assessed at the classification stage, where deep learning algorithms detect and classify cancerous tissue. At the slide-level analysis stage, information for the entire slide is generated by aggregating the patch-level results. A total of 2340 haematoxylin and eosin (H&E) stained slides were used to train and validate the system. The annotations were produced by a medical team of 11 board-certified pathologists with subspecialty competence in prostatic pathology, working independently at 4 different medical centres. The team created pixel-level annotations based on an agreed set of 10 annotation terms, chosen for medical relevance and prevalence. The system achieved 99.53% accuracy in tissue detection, with a sensitivity of 99.78% and a specificity of 99.12%. At 5x magnification, it classified the tissue terms with 92.80% accuracy, 92.61% sensitivity, and 99.25% specificity; at 10x magnification these values were 91.04%, 90.49%, and 99.07%, and at 20x magnification they were 84.71%, 83.95%, and 90.13%, respectively.
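As a rough illustration of the patch-based, three-stage pipeline described above (tissue detection, patch classification, slide-level aggregation), the sketch below shows how such stages could be composed. It is a minimal sketch, not the authors' implementation: the function names (`iter_patches`, `analyse_slide`), the placeholder detector and classifier callables, and the label vocabulary are all hypothetical assumptions.

```python
# Minimal sketch of a three-stage WSI pipeline: (1) tissue detection on patches,
# (2) patch-level classification, (3) slide-level aggregation.
# All names and labels here are illustrative placeholders, not the authors' code.
from typing import Callable, Iterable
import numpy as np

Patch = np.ndarray  # H x W x 3 RGB patch cut from the whole slide image


def iter_patches(slide: np.ndarray, size: int = 256) -> Iterable[Patch]:
    """Divide the whole slide image into non-overlapping square patches."""
    h, w, _ = slide.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield slide[y:y + size, x:x + size]


def analyse_slide(
    slide: np.ndarray,
    tissue_detector: Callable[[Patch], bool],   # stage 1: traditional ML model
    patch_classifier: Callable[[Patch], str],   # stage 2: deep learning (CNN)
) -> dict:
    """Stage 3: aggregate patch-level labels into slide-level information."""
    labels = [
        patch_classifier(p) for p in iter_patches(slide) if tissue_detector(p)
    ]
    n_tissue = len(labels)
    n_cancer = sum(lbl.startswith("cancer") for lbl in labels)
    return {
        "n_tissue_patches": n_tissue,
        # hypothetical stand-in for the reported cancer proportion measurement
        "cancer_proportion": n_cancer / max(n_tissue, 1),
        # hypothetical stand-in for perineural invasion detection
        "perineural_invasion": any(lbl == "perineural_invasion" for lbl in labels),
    }
```

In this sketch the slide-level result is a simple aggregation over patch labels; the actual system may use a different patch size, magnification-specific models, and more elaborate aggregation rules.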
Keywords: CNN; Cancer detection; Computer-assisted diagnosis; Convolutional neural network; Deep learning; Prostate cancer; WSI; Whole slide imaging.
© 2024. The Author(s).