Siamese comparative transformer-based network for unsupervised landmark detection

PLoS One. 2024 Dec 31;19(12):e0313518. doi: 10.1371/journal.pone.0313518. eCollection 2024.

Abstract

Landmark detection is a common task that supports many downstream computer vision applications. Current landmark detection algorithms often train a sophisticated image pose encoder by reconstructing the source image to identify landmarks. Although a well-trained encoder can effectively capture landmark information through image reconstruction, it overlooks the semantic relationships between landmarks. This contradicts the goal of achieving semantic representations in landmark detection tasks. To address these challenges, we introduce a novel Siamese comparative transformer-based network that strengthens the semantic connections among detected landmarks. Specifically, we enhance the connection between landmarks that share the same semantics by employing a Siamese contrastive regularizer. In addition, we integrate a lightweight direction-guided Transformer into the image pose encoder to perceive global feature relationships, thereby improving the representation and encoding of landmarks. Experiments on the CelebA, AFLW, and Cat Heads benchmarks demonstrate that our proposed method achieves competitive performance compared to existing unsupervised methods and even supervised methods.
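The abstract does not give the exact formulation of the Siamese contrastive regularizer, but the idea it describes can be sketched as a contrastive loss over per-landmark embeddings from two augmented views of the same image: the embedding of landmark k in one view is pulled toward landmark k in the other view (same semantics) and pushed away from the remaining landmarks. The function below is an illustrative NumPy sketch under that assumption, not the paper's actual implementation; all names and the temperature value are hypothetical.

```python
import numpy as np

def siamese_contrastive_loss(z_a, z_b, temperature=0.1):
    """Illustrative contrastive regularizer over landmark embeddings.

    z_a, z_b: (K, D) arrays of K landmark embeddings from two augmented
    views of the same image. Landmark k in view A is treated as the
    positive match of landmark k in view B; all other landmarks act as
    negatives. (A sketch, not the paper's exact loss.)
    """
    # L2-normalize so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / temperature              # (K, K) similarity matrix
    # Cross-entropy with the same-semantics landmark as the positive class
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 16))
view_b = z + 0.05 * rng.normal(size=(5, 16))     # slightly perturbed view
aligned = siamese_contrastive_loss(z, view_b)
shuffled = siamese_contrastive_loss(z, np.roll(view_b, 1, axis=0))
print(aligned < shuffled)  # → True: matched landmarks yield a lower loss
```

Rolling the second view's rows deliberately misaligns the semantic correspondence, so the loss rises sharply; minimizing this quantity is what encourages landmarks with the same semantics to map to nearby embeddings.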

MeSH terms

  • Algorithms*
  • Animals
  • Cats
  • Head / diagnostic imaging
  • Humans
  • Image Processing, Computer-Assisted* / methods
  • Neural Networks, Computer
  • Semantics

Grants and funding

  • National Natural Science Foundation of China (Grant Award Number 62101529)
  • Postdoctoral Fellowship Program of CPSF (Award Number GZC20232676)

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.