Objective. In breast diagnostic imaging, the morphological variability of breast tumors and the inherent ambiguity of ultrasound images pose significant challenges for automated analysis. Moreover, multi-task computer-aided diagnosis systems in breast imaging tend to overlook the inherent relationship between the pixel-wise segmentation task and the categorical classification task.
Approach. In this paper, we propose a multi-task learning network with deep inter-task interactions that exploits the inherent relations between the two tasks. First, we fuse self-task and cross-task attention mechanisms to capture two types of inter-task interaction information: location and semantic. In addition, a feature aggregation block is developed based on the channel attention mechanism, which reduces the semantic gap between the decoder and the encoder. To exploit inter-task correlations further, our network uses a circle training strategy that refines heterogeneous features with the help of segmentation maps obtained from the previous training round.
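A minimal sketch of the first component follows, assuming a PyTorch implementation; this is not the authors' released code. The module name `InterTaskAttention` and the token shapes are illustrative: one branch's features serve as the query for self-attention (location cues within its own task) and as the query against the other branch's features for cross-attention (semantic cues from the other task), with the two results fused residually.

```python
import torch
import torch.nn as nn

class InterTaskAttention(nn.Module):
    """Illustrative fusion of self-task attention (within one branch) and
    cross-task attention (queries from one task, keys/values from the other)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, f_task: torch.Tensor, f_other: torch.Tensor) -> torch.Tensor:
        # f_task, f_other: (batch, tokens, dim) flattened feature maps.
        s, _ = self.self_attn(f_task, f_task, f_task)     # location cues within the task
        c, _ = self.cross_attn(f_task, f_other, f_other)  # semantic cues from the other task
        return self.norm(f_task + s + c)                  # residual fusion

# Usage: segmentation tokens attend to classification tokens and vice versa.
seg_tok = torch.randn(2, 196, 256)   # e.g. a 14x14 feature map, flattened
cls_tok = torch.randn(2, 196, 256)
attn = InterTaskAttention(dim=256)
seg_out = attn(seg_tok, cls_tok)
cls_out = attn(cls_tok, seg_tok)
```

The channel-attention feature aggregation block can be read as a squeeze-and-excitation-style gate over concatenated encoder and decoder features; the sketch below is one plausible realization under that assumption, with `FeatureAggregation` and the reduction ratio chosen for illustration.

```python
import torch
import torch.nn as nn

class FeatureAggregation(nn.Module):
    """Channel-attention gating (squeeze-and-excitation style) that re-weights
    fused encoder/decoder features, narrowing their semantic gap before the
    skip connection is consumed by the decoder."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # per-channel weights in (0, 1)
        )

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        x = self.fuse(torch.cat([enc, dec], dim=1))       # align channel counts
        return x * self.gate(x)                           # excite: channel re-weighting

enc = torch.randn(2, 64, 56, 56)
dec = torch.randn(2, 64, 56, 56)
out = FeatureAggregation(64)(enc, dec)  # (2, 64, 56, 56)
```

Finally, the circle training strategy can be sketched as a loop in which the segmentation map predicted in the previous round is fed back as an extra model input in the next. The abstract does not specify the exact wiring, so the two-input `model(images, prev_seg)` signature, the number of rounds, and the loss combination below are all assumptions.

```python
import torch

def circle_train(model, loader, optimizer, seg_loss, cls_loss, rounds: int = 2):
    """Hypothetical circle training loop: each round refines features using
    the (detached) segmentation map produced in the previous round."""
    for images, masks, labels in loader:
        prev_seg = torch.zeros_like(masks, dtype=torch.float32)  # round 0: no prior map
        for _ in range(rounds):
            seg_logits, cls_logits = model(images, prev_seg)     # model consumes prior map
            loss = seg_loss(seg_logits, masks) + cls_loss(cls_logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            prev_seg = torch.sigmoid(seg_logits).detach()        # refined map for next round
```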
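In each sketch, detaching the fed-back segmentation map keeps the rounds' computation graphs independent, and the residual fusion in `InterTaskAttention` preserves each task's own features even when the cross-task signal is weak; both are conventional design choices, not details confirmed by the abstract.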
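Readers seeking the authors' exact architecture (head counts, reduction ratios, number of circle rounds) should consult the full paper; the snippets above only fix the overall data flow the abstract describes.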
Main results. The experimental results show that our method achieved excellent performance on the BUSI and BUS-B datasets, with Dice similarity coefficients (DSCs) of 81.95% and 86.41% for the segmentation task and F1 scores of 82.13% and 69.01% for the classification task, respectively.
Significance. The proposed multi-task interaction learning not only enhances the performance of both breast tumor segmentation and classification but also promotes research in multi-task learning, providing further insights for clinical applications.
Keywords: Attention mechanism; Breast ultrasound; Classification and segmentation; Deep learning; Multi-task.