Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation

Sensors (Basel). 2025 Jan 5;25(1):256. doi: 10.3390/s25010256.

Abstract

Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing because it reduces data collection and annotation effort. However, such cross-sensor transfer is challenging due to differences between sensors in internal light sources, imaging effects, and elastomer properties. By treating the data collected from each type of visuotactile sensor as a separate domain, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce cross-sensor domain gaps. We first propose a Global and Local Aggregation Bottleneck (GLAB) layer that compresses the features extracted by an encoder so that they retain key information, which facilitates learning driven by only a few unlabeled samples. We introduce a Fourier-style transformation (FST) module and a prototype-constrained learning loss to promote global conditional domain-adversarial adaptation, bridging style-level gaps. We also propose a high-confidence-guided teacher-student network that uses a self-distillation mechanism to further reduce content-level gaps between the two domains. Experiments on three cross-sensor domain adaptation tasks and on real-world robotic cross-sensor shape recognition demonstrate that our method outperforms state-of-the-art approaches, notably achieving 89.8% accuracy on the DIGIT recognition dataset.
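To make the style-level adaptation idea concrete, the following is a minimal sketch of a Fourier-based style transfer step in the spirit of the FST module mentioned above: the low-frequency amplitude spectrum of a source tactile image (its "style") is replaced with that of a target-sensor image, while the source phase (its "content") is kept. The function name, the beta parameter, and the use of NumPy are illustrative assumptions, not the authors' implementation.

import numpy as np

def fourier_style_transfer(src_img: np.ndarray, tgt_img: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Swap the low-frequency amplitude of src_img with that of tgt_img,
    keeping the source phase intact.

    src_img, tgt_img: float arrays of shape (H, W, C), same size.
    beta: fraction of the spectrum (per side) treated as "style" (assumed value).
    """
    src_fft = np.fft.fft2(src_img, axes=(0, 1))
    tgt_fft = np.fft.fft2(tgt_img, axes=(0, 1))

    src_amp, src_phase = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Shift so the low frequencies sit at the centre of the spectrum.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    tgt_amp = np.fft.fftshift(tgt_amp, axes=(0, 1))

    h, w = src_img.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2

    # Copy the central low-frequency amplitude block (style) from target into source.
    src_amp[ch - bh:ch + bh, cw - bw:cw + bw] = tgt_amp[ch - bh:ch + bh, cw - bw:cw + bw]
    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))

    # Recombine the target-style amplitude with the source phase and invert the FFT.
    stylised = np.fft.ifft2(src_amp * np.exp(1j * src_phase), axes=(0, 1))
    return np.real(stylised)

In the paper's pipeline, such stylised images would feed the conditional domain-adversarial training together with the prototype-constrained loss; the sketch only illustrates the spectral style swap itself.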

Keywords: cross-sensor domain gaps; style to content; tactile sensing; unsupervised domain adaptation.