Objective: Accurate segmentation of lung nodules in computed tomography images is a critical component of a computer-assisted lung cancer detection/diagnosis system. However, lung nodule segmentation is a challenging task due to the heterogeneity of nodules. The purpose of this study was to develop a hybrid deep learning (H-DL) model for the segmentation of lung nodules with a wide variety of sizes, shapes, margins, and opacities.
Materials and methods: A dataset of 847 cases collected from the Lung Image Database Consortium (LIDC) image collection, each containing lung nodules with diameters greater than 7 mm and less than 45 mm that were manually annotated by at least two radiologists, was randomly split into 683 training/validation cases and 164 independent test cases. The 50% consensus consolidation of the radiologists' annotations was used as the reference standard for each nodule. We designed a new H-DL model that combines two deep convolutional neural networks (DCNNs) with different structures as encoders to increase the learning capability for the segmentation of complex lung nodules. Leveraging the basic symmetric U-shaped architecture of U-Net, we redesigned two new U-shaped deep learning (U-DL) models that were expanded to six levels of convolutional layers. One U-DL model used a shallow DCNN structure containing 16 convolutional layers adapted from VGG-19 as the encoder; the other used a deep DCNN structure containing 200 layers adapted from DenseNet-201 as the encoder. Both U-DL models shared the same decoder, with only one convolutional layer at each level; we refer to them as the shallow and the deep U-DL models, respectively. Finally, an ensemble layer combined the two U-DL models into the H-DL model. We compared the effectiveness of the H-DL, shallow U-DL, and deep U-DL models by applying each separately to the test set. The accuracy of volume segmentation for each nodule was evaluated by the 3D Dice coefficient and Jaccard index (JI) relative to the reference standard. For comparison, we calculated the median and minimum of the 3D Dice and JI over the individual radiologists who segmented each nodule, referred to as M-Dice, min-Dice, M-JI, and min-JI.
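For illustration only, the following is a minimal PyTorch sketch of this two-encoder, U-shaped ensemble design. The plain convolutional blocks, channel widths, 2D input size, transposed-convolution up-sampling, and probability-averaging ensemble are all simplifying assumptions standing in for the paper's VGG-19- and DenseNet-201-derived encoders and its trained ensemble layer.

```python
import torch
import torch.nn as nn

class UDL(nn.Module):
    """U-shaped encoder-decoder; `enc_blocks` supplies one block per level."""
    def __init__(self, enc_blocks, ch):
        super().__init__()
        self.enc = nn.ModuleList(enc_blocks)
        self.pool = nn.MaxPool2d(2)
        # Up-sampling plus a single 3x3 convolution per decoder level,
        # mirroring the one-convolution-per-level decoder described above.
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(ch[i + 1], ch[i], 2, stride=2)
            for i in range(len(ch) - 1)])
        self.dec = nn.ModuleList([
            nn.Conv2d(2 * ch[i], ch[i], 3, padding=1)
            for i in range(len(ch) - 1)])
        self.head = nn.Conv2d(ch[0], 1, 1)  # 1-channel segmentation logits

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:  # keep a skip, pool between levels
                skips.append(x)
                x = self.pool(x)
        for i in reversed(range(len(self.dec))):
            x = self.up[i](x)
            x = torch.relu(self.dec[i](torch.cat([x, skips[i]], dim=1)))
        return self.head(x)

def conv_block(cin, cout, n_convs):
    """Plain Conv-ReLU stack standing in for a VGG/DenseNet encoder stage."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class HDL(nn.Module):
    """Ensemble of the shallow and deep U-DL models (probability averaging)."""
    def __init__(self, shallow, deep):
        super().__init__()
        self.shallow, self.deep = shallow, deep

    def forward(self, x):
        return 0.5 * (torch.sigmoid(self.shallow(x)) +
                      torch.sigmoid(self.deep(x)))

ch = [32, 64, 128, 256, 512, 512]  # six levels; widths are hypothetical
shallow = UDL([conv_block(1 if i == 0 else ch[i - 1], c, 2)
               for i, c in enumerate(ch)], ch)
deep = UDL([conv_block(1 if i == 0 else ch[i - 1], c, 4)
            for i, c in enumerate(ch)], ch)
model = HDL(shallow, deep)
mask = model(torch.randn(1, 1, 96, 96)) > 0.5  # binary nodule mask
```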
Results: For the 164 test cases with 327 nodules, our H-DL model achieved an average 3D Dice coefficient of 0.750 ± 0.135 and an average JI of 0.617 ± 0.159. The radiologists' average M-Dice was 0.778 ± 0.102 and average M-JI was 0.651 ± 0.127; both were significantly higher than those achieved by the H-DL model (p < 0.05). The radiologists' average min-Dice (0.685 ± 0.139) and average min-JI (0.537 ± 0.153) were significantly lower than those achieved by the H-DL model (p < 0.05). These results indicate that the H-DL model approached the average performance of the radiologists and surpassed the radiologist whose manual segmentation yielded the min-Dice and min-JI. Moreover, the average Dice and average JI achieved by the H-DL model were significantly higher than those achieved by the shallow U-DL model alone (Dice of 0.745 ± 0.139, JI of 0.611 ± 0.161; p < 0.05) or the deep U-DL model alone (Dice of 0.739 ± 0.145, JI of 0.604 ± 0.163; p < 0.05).
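For reference, the 3D Dice coefficient and JI reported above are standard overlap measures: Dice = 2|A∩B|/(|A|+|B|) and JI = |A∩B|/|A∪B| for a predicted mask A and reference mask B. A minimal NumPy sketch follows, using random placeholder volumes rather than the study's data.

```python
import numpy as np

def dice_and_jaccard(pred, ref):
    """3D Dice coefficient and Jaccard index for boolean volumes."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    dice = 2.0 * inter / (pred.sum() + ref.sum())
    ji = inter / np.logical_or(pred, ref).sum()
    return dice, ji

# Random placeholder volumes; real use would pass the model's binary mask
# and the 50% consensus reference mask for one nodule.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
ref = rng.random((64, 64, 64)) > 0.5
print(dice_and_jaccard(pred, ref))
```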
Conclusion: Our newly developed H-DL model outperformed either of the individual shallow or deep U-DL models. By combining multilevel features learned by both the shallow and the deep DCNNs, the H-DL method could achieve segmentation accuracy comparable to radiologists' segmentations for nodules with a wide range of image characteristics.
Keywords: computer-aided diagnosis; deep learning; lung nodule; nodule segmentation.
© 2022 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.