Landmark detection is a common task that benefits many downstream computer vision tasks. Current landmark detection algorithms often train a sophisticated image pose encoder to identify landmarks by reconstructing the source image. Although a well-trained encoder can effectively capture landmark information through image reconstruction, it overlooks the semantic relationships between landmarks, which contradicts the goal of learning semantic representations for landmark detection. To address these challenges, we introduce a novel Siamese comparative transformer-based network that strengthens the semantic connections among detected landmarks. Specifically, we enhance the connections between landmarks that share the same semantics by employing a Siamese contrastive regularizer. In addition, we integrate a lightweight direction-guided Transformer into the image pose encoder to capture global feature relationships, thereby improving the representation and encoding of landmarks. Experiments on the CelebA, AFLW, and Cat Heads benchmarks demonstrate that our proposed method achieves competitive performance compared to existing unsupervised methods, and is even competitive with supervised methods.
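As an illustration of the idea behind a Siamese contrastive regularizer, the sketch below shows an InfoNCE-style loss over landmark embeddings from two augmented views of the same image: embeddings of the same landmark index across views form positive pairs, all other cross-view pairs are negatives. This is a minimal assumption-laden sketch (the function name, `temperature` parameter, and NumPy formulation are illustrative), not the paper's actual implementation.

```python
import numpy as np

def siamese_contrastive_regularizer(z_a, z_b, temperature=0.1):
    """Illustrative sketch of a Siamese contrastive regularizer.

    z_a, z_b: (K, D) arrays of K landmark embeddings produced by the two
    Siamese branches. Row i of z_a and row i of z_b are assumed to carry
    the same landmark semantics (positive pair); all other cross-view
    pairs act as negatives, as in an InfoNCE-style objective.
    """
    # L2-normalise so that dot products become cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    logits = z_a @ z_b.T / temperature            # (K, K) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability

    # Row-wise log-softmax; positives lie on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing such a loss pulls same-semantics landmark embeddings together across the two branches while pushing different landmarks apart, which is one plausible way to realize the semantic-consistency constraint described above.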
Copyright: © 2024 Zhao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.