Contrastive Self-Supervised Hashing With Dual Pseudo Agreement
Blog Article
Recently, unsupervised deep hashing has attracted increasing attention, mainly because of its ability to learn binary codes without identity annotations. However, because the pseudo labels are predicted by pretext tasks, unsupervised deep hashing becomes unstable when learning with noisy labels. To mitigate this issue, we propose a simple but effective approach to self-supervised hash learning based on dual pseudo agreement. By adding a consistency constraint, our method can filter out corrupted labels and encourage generalization for effective knowledge distillation.
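The consistency constraint can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes pseudo labels come as integer cluster assignments for two augmented views of the same images, and keeps a label only when the two views agree (the hypothetical helper name `dual_pseudo_agreement` is ours).

```python
import numpy as np

def dual_pseudo_agreement(labels_view1, labels_view2):
    """Keep a pseudo label only when two augmented views agree.

    labels_view1 / labels_view2: integer cluster assignments produced by
    a pretext task for the same images under two augmentations.
    Returns a boolean mask of "stable" samples and the retained labels,
    with -1 marking labels discarded as potentially corrupted.
    """
    labels_view1 = np.asarray(labels_view1)
    labels_view2 = np.asarray(labels_view2)
    mask = labels_view1 == labels_view2               # consistency constraint
    stable_labels = np.where(mask, labels_view1, -1)  # -1 = rejected label
    return mask, stable_labels

# Toy example: sample 2 disagrees across views, so it is discarded.
mask, stable = dual_pseudo_agreement([0, 1, 2, 1], [0, 1, 0, 1])
# mask   -> [True, True, False, True]
# stable -> [0, 1, -1, 1]
```

Only the samples that survive this filter would then contribute pseudo-label supervision downstream, which is one plausible reading of how the agreement step stabilizes training.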
Specifically, we use the refined pseudo labels as a stabilizing constraint to train hash codes, which implicitly encodes the semantic structure of the data into the learned Hamming space. Based on the stable pseudo labels, we propose a self-supervised hashing method with mutual information and a noise contrastive loss. Throughout hash learning, the stable pseudo labels and the data distribution work together as teachers to guide binary code learning. Extensive experiments on three publicly available datasets demonstrate that the proposed method consistently outperforms state-of-the-art methods by large margins.
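To make the noise contrastive objective concrete, here is a generic InfoNCE-style loss over tanh-relaxed hash codes, a common formulation in contrastive hashing. It is a sketch under our own assumptions (batch-internal negatives, cosine similarity, the function name `info_nce_hash_loss`), not the paper's exact loss.

```python
import numpy as np

def info_nce_hash_loss(codes_a, codes_b, temperature=0.5):
    """InfoNCE-style contrastive loss over relaxed hash codes.

    codes_a, codes_b: (N, K) float arrays of tanh-relaxed codes for two
    augmented views of the same N images. Matching rows are positives;
    every other row in the batch serves as a negative.
    """
    a = codes_a / np.linalg.norm(codes_a, axis=1, keepdims=True)
    b = codes_b / np.linalg.norm(codes_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                  # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

rng = np.random.default_rng(0)
codes = np.tanh(rng.normal(size=(8, 16)))
loss_aligned    = info_nce_hash_loss(codes, codes)            # matched pairs
loss_mismatched = info_nce_hash_loss(codes, codes[::-1].copy())
# loss_aligned is lower, since matched views agree on the diagonal
```

At inference time the relaxed codes would be binarized, e.g. `np.sign(codes)`, to obtain the final Hamming-space representation.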