Overfit detection method for deep neural networks trained to beamform ultrasound images

Ultrasonics. 2024 Dec 27:148:107562. doi: 10.1016/j.ultras.2024.107562. Online ahead of print.

Abstract

Deep neural networks (DNNs) have remarkable potential to reconstruct ultrasound images. However, these networks can overfit to their training data, which is typically detected by monitoring the loss function during an otherwise time-consuming training process or by obtaining new sources of test data. We present a method to detect overfitting, with associated evaluation approaches, that requires only knowledge of a network architecture and its trained weights. Three types of artificial DNN inputs (i.e., zeros, ones, and Gaussian noise), unseen during training, were fed to three DNNs designed for ultrasound image formation, trained on multi-site data, and submitted to the Challenge on Ultrasound Beamforming with Deep Learning (CUBDL). Qualitative and quantitative comparisons of the DNN-created images to ground truth images immediately revealed signs of overfitting (e.g., the zeros input produced mean output values ≥0.08, and the ones input produced mean output values ≤0.07 with corresponding image-to-image normalized correlations ≤0.8). The proposed approach promises to detect overfitting without lengthy network retraining or the curation of additional test data. Potential applications include sanity checks during federated learning, as well as optimization, security, public policy, regulation creation, and benchmarking.

Keywords: Beamforming; Benchmarking; Deep learning; Image reconstruction; Standardization; Ultrasound.
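
The probing idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `beamformer` callable stands in for a trained DNN, and the threshold values (0.08, 0.07, 0.8) are taken from the example figures reported in the abstract and treated here as illustrative heuristics.

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def probe_inputs(shape, seed=0):
    """Three artificial probe inputs (zeros, ones, Gaussian noise)."""
    rng = np.random.default_rng(seed)
    return {
        "zeros": np.zeros(shape),
        "ones": np.ones(shape),
        "noise": rng.standard_normal(shape),
    }

def overfit_flags(beamformer, shape, reference,
                  zeros_mean_thresh=0.08, ones_mean_thresh=0.07,
                  corr_thresh=0.8):
    """Heuristic overfit checks loosely based on the abstract's thresholds.

    `beamformer` is any callable mapping an input array to an output image;
    `reference` is a ground-truth image for the correlation check.
    """
    probes = probe_inputs(shape)
    out_zeros = beamformer(probes["zeros"])
    out_ones = beamformer(probes["ones"])
    return {
        "zeros_mean": bool(out_zeros.mean() >= zeros_mean_thresh),
        "ones_mean_low": bool(out_ones.mean() <= ones_mean_thresh),
        "ones_corr_low": bool(
            normalized_correlation(out_ones, reference) <= corr_thresh
        ),
    }
```

In practice the `beamformer` would be a trained network evaluated in inference mode; a healthy beamformer should respond to these degenerate inputs in physically plausible ways, whereas an overfit one may reproduce memorized training content.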