Syn2Real: synthesis of CT image ring artifacts for deep learning-based correction

Phys Med Biol. 2025 Jan 22. doi: 10.1088/1361-6560/adad2c. Online ahead of print.

Abstract

Objective. We strive to overcome the challenges posed by ring artifacts in X-ray computed tomography (CT) by developing a novel approach for generating training data for deep learning-based correction methods. Training such networks requires large, high-quality datasets, which are often generated in the data domain, a process that is time-consuming and expensive. Our objective is to develop a technique for synthesizing realistic ring artifacts directly in the image domain, enabling scalable production of training data without relying on the physics of a specific imaging system.
Approach. We develop "Syn2Real," a computationally efficient pipeline that generates realistic ring artifacts directly in the image domain. To demonstrate the effectiveness of our approach, we train two versions of UNet, a vanilla version and a high-capacity version with self-attention layers that we call UNetpp, with ℓ2 and perceptual losses, as well as a diffusion model, on energy-integrating CT images with and without these synthetic ring artifacts.
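As an illustration only (the abstract does not specify the exact synthesis procedure), one plausible image-domain recipe is to resample a CT slice into polar coordinates around the isocenter, where rings appear as vertical stripes, apply small per-radius gain perturbations that mimic miscalibrated detector channels, and resample back. The sketch below assumes this polar-resampling approach; the function name and all parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates


def add_synthetic_rings(img, num_rings=30, max_amplitude=0.02, seed=None):
    """Hypothetical sketch: add concentric ring artifacts to a 2D CT slice."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    n_r, n_theta = int(np.ceil(r_max)), 720

    # Cartesian -> polar resampling grid (rows = angle, cols = radius).
    radii = np.linspace(0, r_max, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas)
    ys, xs = cy + rr * np.sin(tt), cx + rr * np.cos(tt)
    polar = map_coordinates(img, [ys, xs], order=1, mode="nearest")

    # Rings become vertical stripes in polar space: perturb a random
    # subset of radii with small multiplicative gain errors.
    gain = np.ones(n_r)
    idx = rng.choice(n_r, size=min(num_rings, n_r), replace=False)
    gain[idx] += rng.uniform(-max_amplitude, max_amplitude, size=idx.size)
    polar_corrupted = polar * gain[None, :]

    # Polar -> Cartesian resampling yields the ring-corrupted slice.
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r = np.hypot(yy - cy, xx - cx) * (n_r - 1) / r_max
    t = (np.arctan2(yy - cy, xx - cx) % (2 * np.pi)) * n_theta / (2 * np.pi)
    return map_coordinates(polar_corrupted, [t, r], order=1, mode="nearest")
```

Each clean slice and its corrupted counterpart would then form an input-target pair for training the correction networks described above.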
Main Results. Despite being trained on conventional single-energy CT images, our models effectively correct ring artifacts in monoenergetic images, at different energy levels and slice thicknesses, from a prototype photon-counting CT system. This generalizability validates the realism and versatility of our ring artifact generation process.
Significance. Ring artifacts in X-ray CT pose a unique challenge to image quality and clinical utility. By focusing on data generation, our work provides a foundation for developing more robust and adaptable ring artifact correction methods for pre-clinical, clinical, and other CT applications.

Keywords: CT; Data synthesis; Deep learning; Photon-counting CT; Ring artifacts; UNet.