SSL-MBC: Self-Supervised Learning With Multibranch Consistency for Few-Shot PolSAR Image Classification

Bibliographic Details
Main Authors: Wenmei Li, Hao Xia, Bin Xi, Yu Wang, Jing Lu, Yuhong He
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/10839016/
Description
Summary: Deep learning methods have recently made substantial advances in polarimetric synthetic aperture radar (PolSAR) image classification. However, their reliance on massive labeled samples for supervised training is a major limitation, especially for PolSAR images, which are difficult to annotate manually. Self-supervised learning (SSL) is an effective remedy for insufficient labeled samples because it mines supervisory information from the data itself. Nevertheless, fully exploiting SSL for PolSAR classification remains challenging due to the complexity of the data. To address these issues, we propose an SSL model with multibranch consistency (SSL-MBC) for few-shot PolSAR image classification. Specifically, the data augmentation used in the pretext task combines various spatial transformations with channel transformations obtained through scattering feature extraction. In addition, the distinct scattering features of PolSAR data are treated as its unique multimodal representations. We observe that different modal representations of the same instance are similar in the encoding space, and that the hidden features become more prominent as more modalities are involved. Therefore, a multibranch contrastive SSL framework without negative samples is employed to learn representations efficiently. The resulting abstract features are then fine-tuned to ensure generalization in downstream tasks, thereby enabling few-shot classification. Experimental results on selected PolSAR datasets show that our method outperforms existing approaches. An exhaustive ablation study shows that performance degrades when either the data augmentation or any branch is masked, and that the classification result does not depend strongly on the number of labeled samples.
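
The abstract describes a multibranch contrastive SSL objective that needs no negative samples but does not give the exact loss or architecture. The following minimal PyTorch sketch is therefore only an illustration, not the authors' implementation: it assumes a SimSiam-style stop-gradient consistency loss applied pairwise across branches, and the names (MultiBranchConsistency, encoder, predictor, feat_dim) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchConsistency(nn.Module):
    """Negative-free consistency loss over several views of one sample.

    Sketch only: a shared encoder embeds each view (e.g., a spatially
    transformed patch and channel-transformed scattering features), and
    every ordered pair of branches is pulled together with a negative
    cosine similarity, with stop-gradient on the target branch.
    """

    def __init__(self, encoder: nn.Module, feat_dim: int = 128):
        super().__init__()
        self.encoder = encoder                  # shared backbone for all branches
        self.predictor = nn.Sequential(         # small prediction head, SimSiam-style
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # Encode every augmented/modal view with the shared encoder.
        z = [self.encoder(v) for v in views]
        p = [self.predictor(zi) for zi in z]

        # Pairwise negative cosine similarity; detaching the target
        # embedding (stop-gradient) avoids collapse without negatives.
        loss, pairs = 0.0, 0
        for i in range(len(views)):
            for j in range(len(views)):
                if i == j:
                    continue
                loss = loss - F.cosine_similarity(p[i], z[j].detach(), dim=-1).mean()
                pairs += 1
        return loss / pairs

In use, each branch would receive a different view of the same PolSAR sample; averaging the pairwise terms keeps the objective symmetric as the number of branches (modal representations) grows, which matches the abstract's claim that masking any branch degrades performance.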
ISSN: 1939-1404
2151-1535