Self-Supervised Learning for High-Resolution Remote Sensing Images Change Detection With Variational Information Bottleneck
Abstract
Notable achievements have been made in remote sensing image change detection with sample-driven supervised deep learning methods. However, the large number of labeled samples these methods require is impractical to obtain in many applications, which is a major constraint on the development of supervised deep learning. Self-supervised learning, which uses unlabeled data to construct pretext tasks for model pretraining, can largely alleviate this sample dilemma, and the construction of the pretext task is key to the performance of the downstream task. In this work, an improved contrastive self-supervised pretext task that is better suited to downstream change detection is proposed. Specifically, an improved Siamese network with a change detection-like architecture is trained to extract multilevel fusion features from different image pairs, both globally and locally. On this basis, the contrastive loss between feature pairs is minimized to learn more valuable feature representations for downstream change detection. In addition, to further alleviate the problems of scarce prior information and heavy image noise in downstream few-sample change detection, we propose using variational information bottleneck theory to impose an explicit regularization constraint on the model. Compared with other methods, our method shows stronger robustness and finer detection results in both the quantitative and qualitative evaluations on two publicly available datasets.
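The two training signals described above, a contrastive loss over matching feature pairs plus a variational information bottleneck (VIB) penalty, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the InfoNCE-style loss, the Gaussian VIB parameterization (`mu`, `log_var` against a standard normal prior), and the trade-off weight `beta` are all assumptions.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss between two batches of features.
    Matching rows of z1 and z2 are treated as positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # positives on the diagonal

def vib_kl(mu, log_var):
    """KL divergence between N(mu, sigma^2) and a standard normal prior,
    averaged over the batch -- the explicit VIB regularization term."""
    return 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))

# Toy batch: z2 is a slightly perturbed view of z1 (a positive pair).
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))
mu = rng.normal(size=(8, 16))
log_var = 0.1 * rng.normal(size=(8, 16))

beta = 1e-3                                               # hypothetical VIB weight
total = contrastive_loss(z1, z2) + beta * vib_kl(mu, log_var)
print(float(total))
```

In a full pipeline the features and the Gaussian parameters would come from the Siamese encoder, and `beta` would balance task-relevant information against compression of the noisy input.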