X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data
This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. Large amounts of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment, poor discriminative information, and the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, learning to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement over several state-of-the-art methods.
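The label propagation module is the part of the abstract that is easiest to make concrete. Below is a minimal, self-contained sketch of graph-based label propagation over high-level features in the spirit described above (seed labels diffused on a kNN feature graph). It is not the authors' X-ModalNet implementation; the function name and the parameters k, alpha, and n_iter are illustrative assumptions.

```python
# Minimal sketch of semi-supervised label propagation on a kNN graph
# built from high-level features. NOT the authors' X-ModalNet code;
# k, alpha and n_iter are illustrative assumptions.
import numpy as np

def propagate_labels(features, labels, n_classes, k=10, alpha=0.99, n_iter=50):
    """features: (n, d) array; labels: (n,) ints with -1 = unlabeled."""
    n = features.shape[0]
    # Pairwise squared Euclidean distances between feature vectors.
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    # RBF affinities restricted to each sample's k nearest neighbours.
    sigma2 = np.mean(np.sort(d2, axis=1)[:, :k])
    nbrs = np.argsort(d2, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    W = np.zeros((n, n))
    W[rows, nbrs.ravel()] = np.exp(-d2[rows, nbrs.ravel()] / sigma2)
    W = np.maximum(W, W.T)                # symmetrise the graph
    # Symmetric normalisation S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed matrix; unlabeled rows stay all-zero.
    Y = np.zeros((n, n_classes))
    seeds = labels >= 0
    Y[np.where(seeds)[0], labels[seeds]] = 1.0
    # Diffusion: F <- alpha * S @ F + (1 - alpha) * Y.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)

# Toy usage: two Gaussian blobs, one labeled seed per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(4.0, 1.0, (50, 8))])
y = np.full(100, -1)
y[0], y[50] = 0, 1
print(propagate_labels(X, y, n_classes=2, k=8))
```

In the paper's setting the graph is described as updatable, i.e. rebuilt as the network's high-level features improve during training; the fixed-point of the diffusion above also has the classic closed form F* = (1 − α)(I − αS)⁻¹Y.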
Detailed description

Author: Hong, Danfeng [author]
Format: electronic article
Language: English
Published: 2020
Subject headings: Deep learning; Cross-modality; Deep neural network; Adversarial; Hyperspectral; Semi-supervised; Mutual learning; Multispectral; Label propagation; Fusion; Remote sensing; Synthetic aperture radar
Extent: 12 pages
Parent work: Contained in: ISPRS Journal of Photogrammetry and Remote Sensing, official publication of the International Society for Photogrammetry and Remote Sensing (ISPRS), Elsevier, Amsterdam [u.a.]
Parent work: volume:167 ; year:2020 ; pages:12-23 ; extent:12
Links: DOI: 10.1016/j.isprsjprs.2020.06.014
Catalogue ID: ELV051149109
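The DOI recorded above can be dereferenced to structured citation metadata through the doi.org resolver's content negotiation. The following stand-alone sketch illustrates this; it assumes network access, and the response field names (title, container-title, page) follow the usual CSL-JSON convention rather than anything stated in this record.

```python
# Sketch: resolve the DOI to citation metadata via doi.org content
# negotiation. Assumes network access; field names follow CSL-JSON.
import json
import urllib.request

DOI = "10.1016/j.isprsjprs.2020.06.014"
req = urllib.request.Request(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    meta = json.load(resp)

print(meta.get("title"))            # article title
print(meta.get("container-title"))  # journal name
print(meta.get("page"))             # expected "12-23"
```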
LEADER 01000caa a22002652 4500
001    ELV051149109
003    DE-627
005    20230626031629.0
007    cr uuu---uuuuu
008    210910s2020 xx |||||o 00| ||eng c
024 7ـ |a 10.1016/j.isprsjprs.2020.06.014 |2 doi
028 52 |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001111.pica
035    |a (DE-627)ELV051149109
035    |a (ELSEVIER)S0924-2716(20)30172-6
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 570 |q VZ
082 04 |a 610 |q VZ
082 04 |a 620 |q VZ
084    |a 52.57 |2 bkl
084    |a 53.36 |2 bkl
100 1  |a Hong, Danfeng |e verfasserin |4 aut
245 10 |a X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data
264  1 |c 2020
300    |a 12
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a nicht spezifiziert |b z |2 rdamedia
338    |a nicht spezifiziert |b zu |2 rdacarrier
520    |a This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. Large amounts of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment, poor discriminative information, and the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, learning to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement over several state-of-the-art methods.
650  7 |a Deep learning |2 Elsevier
650  7 |a Cross-modality |2 Elsevier
650  7 |a Deep neural network |2 Elsevier
650  7 |a Adversarial |2 Elsevier
650  7 |a Hyperspectral |2 Elsevier
650  7 |a Semi-supervised |2 Elsevier
650  7 |a Mutual learning |2 Elsevier
650  7 |a Multispectral |2 Elsevier
650  7 |a Label propagation |2 Elsevier
650  7 |a Fusion |2 Elsevier
650  7 |a Remote sensing |2 Elsevier
650  7 |a Synthetic aperture radar |2 Elsevier
700 1  |a Yokoya, Naoto |4 oth
700 1  |a Xia, Gui-Song |4 oth
700 1  |a Chanussot, Jocelyn |4 oth
700 1  |a Zhu, Xiao Xiang |4 oth
773 08 |i Enthalten in |n Elsevier |t ISPRS Journal of Photogrammetry and Remote Sensing |d official publication of the International Society for Photogrammetry and Remote Sensing (ISPRS) |g Amsterdam [u.a.] |w (DE-627)ELV016966376
773 18 |g volume:167 |g year:2020 |g pages:12-23 |g extent:12
856 40 |u https://doi.org/10.1016/j.isprsjprs.2020.06.014 |3 Volltext
912    |a GBV_USEFLAG_U
912    |a GBV_ELV
912    |a SYSFLAG_U
912    |a GBV_ILN_70
936 bk |a 52.57 |j Energiespeicherung |q VZ
936 bk |a 53.36 |j Energiedirektumwandler |j elektrische Energiespeicher |q VZ
951    |a AR
952    |d 167 |j 2020 |h 12-23 |g 12
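Records like the one above are typically distributed as MARC21/MARCXML. As an illustration of how such a record can be queried programmatically, the sketch below embeds a reduced fragment of this record and extracts the title (field 245 $a) and DOI (field 024 $a) using only Python's standard library; the helper name subfield is our own illustrative choice, not part of any MARC API.

```python
# Sketch: query a MARCXML rendering of this record with the Python
# standard library. The embedded XML is a reduced, illustrative
# fragment of the full record.
import xml.etree.ElementTree as ET

MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data</subfield>
  </datafield>
  <datafield tag="024" ind1="7" ind2=" ">
    <subfield code="a">10.1016/j.isprsjprs.2020.06.014</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
</record>"""

NS = {"m": "http://www.loc.gov/MARC21/slim"}

def subfield(record, tag, code):
    """Return the first matching subfield's text, or None."""
    el = record.find(f"m:datafield[@tag='{tag}']/m:subfield[@code='{code}']", NS)
    return el.text if el is not None else None

record = ET.fromstring(MARCXML)
print("Title:", subfield(record, "245", "a"))
print("DOI:  ", subfield(record, "024", "a"))
```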