TY - JOUR
T1 - Achieving high performance on sketch-based image retrieval without real sketches for training
AU - Saavedra, Jose M.
AU - Stears, Christopher
AU - Campos, Waldo
N1 - Publisher Copyright:
© 2025
PY - 2025/7
Y1 - 2025/7
AB - Sketch-based image retrieval (SBIR) has become an attractive area in computer vision. Alongside advances in deep learning, increasingly sophisticated SBIR models have delivered steadily better results. However, these models still rely on supervised learning strategies, requiring real sketch-photo pairs. Collecting such a paired dataset is impractical in real environments (e.g., eCommerce), which can limit the widespread adoption of this technology. Therefore, building on the ability of foundation models to extract highly semantic image features, we propose S3BIR-DINOv2, a self-supervised SBIR model that combines pseudo-sketches (addressing the absence of real sketches for training), learnable vectors that let a single encoder process the two underlying image modalities, contrastive learning, and an adapted DINOv2 as the visual encoder. Our experiments show that the model performs strongly on diverse public datasets without requiring real sketches for training, reaching an overall mAP of 61.10% on Flickr15K and 44.37% on the eCommerce dataset.
KW - Bimodal representation learning
KW - Self-supervision
KW - Sketch-based image retrieval
UR - http://www.scopus.com/inward/record.url?scp=105003871358&partnerID=8YFLogxK
U2 - 10.1016/j.patrec.2025.04.018
DO - 10.1016/j.patrec.2025.04.018
M3 - Article
AN - SCOPUS:105003871358
SN - 0167-8655
VL - 193
SP - 94
EP - 100
JO - Pattern Recognition Letters
JF - Pattern Recognition Letters
ER -