Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification
Abstract: Although numerous advanced classification models have recently been developed for the land cover mapping task, the monotony of a single remote sensing data source (e.g., using only hyperspectral or only multispectral data) limits further improvement of classification accuracy and tends to hit a performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, which jointly learns multimodal feature representations from hyperspectral (HS) and multispectral (MS) superpixels for more accurate land cover classification (LCC). Supace learns a common subspace across multimodal remote sensing data in which the diverse and complementary information of the different modalities can be combined more effectively, enhancing the discriminative ability of the learned features. To better capture the semantic information of objects during feature learning, superpixels, rather than individual pixels, are taken as the unit of study in Supace. Extensive experiments on two popular hyperspectral-multispectral datasets demonstrate the superiority of the proposed Supace over several well-known multimodal remote sensing feature learning baselines on the land cover classification task.
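The record does not reproduce Supace's actual optimization, so purely as an illustration of the general idea described in the abstract (projecting HS and MS superpixel features into one shared subspace and fusing them there), here is a toy sketch using an SVD of the cross-covariance, i.e. a CCA-style stand-in rather than the paper's method; all shapes and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 50 superpixels, each with a 100-band hyperspectral (HS)
# and a 10-band multispectral (MS) mean-spectrum feature vector.
n_superpixels = 50
X_hs = rng.standard_normal((n_superpixels, 100))
X_ms = rng.standard_normal((n_superpixels, 10))

def common_subspace(A, B, dim):
    """Embed two modalities into a shared low-dimensional subspace via the
    SVD of their cross-covariance (a CCA-flavored illustration, not Supace)."""
    A = A - A.mean(axis=0)                      # center each modality
    B = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B, full_matrices=False)
    return A @ U[:, :dim], B @ Vt.T[:, :dim]    # aligned embeddings

Z_hs, Z_ms = common_subspace(X_hs, X_ms, dim=5)
fused = np.hstack([Z_hs, Z_ms])  # complementary information combined per superpixel
print(fused.shape)               # (50, 10)
```

The fused per-superpixel features would then feed an ordinary classifier to produce the land cover map.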
Detailed description
Author: Hong, DanFeng [author]
Format: E-Article
Language: English
Published: 2022
Subject headings: classification; hyperspectral image; land cover; multimodal; multispectral image; remote sensing; subspace learning; superpixels
Note: © Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2022
Contained in: Science in China - Heidelberg : Springer, 1997, 65(2022), issue 4, 18 March, pages 802-808
Issue reference: volume:65 ; year:2022 ; number:4 ; day:18 ; month:03 ; pages:802-808
DOI: 10.1007/s11431-021-1988-y
Catalog ID: SPR050719297
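The bibliographic fields above map directly onto a machine-readable citation. A minimal sketch in Python, with values copied from this record (the `record` dict layout and the `format_citation` helper are illustrative, not part of any catalog API):

```python
# Field values taken verbatim from the catalog record above.
record = {
    "authors": ["Hong, DanFeng", "Wu, Xin", "Yao, Jing", "Zhu, XiaoXiang"],
    "title": ("Beyond pixels: Learning from multimodal hyperspectral "
              "superpixels for land cover classification"),
    "journal": "Science in China",
    "volume": 65,
    "issue": 4,
    "year": 2022,
    "pages": "802-808",
    "doi": "10.1007/s11431-021-1988-y",
}

def format_citation(r):
    """Render the record as a single citation line with a resolvable DOI link."""
    authors = "; ".join(r["authors"])
    return (f"{authors} ({r['year']}). {r['title']}. {r['journal']}, "
            f"{r['volume']}({r['issue']}), {r['pages']}. "
            f"https://doi.org/{r['doi']}")

print(format_citation(record))
```

Note that `https://doi.org/<doi>` is the canonical resolver form of the `https://dx.doi.org/...` full-text link given in field 856 below.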
LEADER  01000naa a22002652 4500
001     SPR050719297
003     DE-627
005     20230507184854.0
007     cr uuu---uuuuu
008     230507s2022 xx |||||o 00| ||eng c
024 7   |a 10.1007/s11431-021-1988-y |2 doi
035     |a (DE-627)SPR050719297
035     |a (SPR)s11431-021-1988-y-e
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
100 1   |a Hong, DanFeng |e verfasserin |4 aut
245 1 0 |a Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification
264   1 |c 2022
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
500     |a © Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2022
520     |a Abstract: Although numerous advanced classification models have recently been developed for the land cover mapping task, the monotony of a single remote sensing data source (e.g., using only hyperspectral or only multispectral data) limits further improvement of classification accuracy and tends to hit a performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, which jointly learns multimodal feature representations from hyperspectral (HS) and multispectral (MS) superpixels for more accurate land cover classification (LCC). Supace learns a common subspace across multimodal remote sensing data in which the diverse and complementary information of the different modalities can be combined more effectively, enhancing the discriminative ability of the learned features. To better capture the semantic information of objects during feature learning, superpixels, rather than individual pixels, are taken as the unit of study in Supace. Extensive experiments on two popular hyperspectral-multispectral datasets demonstrate the superiority of the proposed Supace over several well-known multimodal remote sensing feature learning baselines on the land cover classification task.
650   4 |a classification |7 (dpeaa)DE-He213
650   4 |a hyperspectral image |7 (dpeaa)DE-He213
650   4 |a land cover |7 (dpeaa)DE-He213
650   4 |a multimodal |7 (dpeaa)DE-He213
650   4 |a multispectral image |7 (dpeaa)DE-He213
650   4 |a remote sensing |7 (dpeaa)DE-He213
650   4 |a subspace learning |7 (dpeaa)DE-He213
650   4 |a superpixels |7 (dpeaa)DE-He213
700 1   |a Wu, Xin |4 aut
700 1   |a Yao, Jing |4 aut
700 1   |a Zhu, XiaoXiang |4 aut
773 0 8 |i Enthalten in |t Science in China |d Heidelberg : Springer, 1997 |g 65(2022), 4 vom: 18. März, Seite 802-808 |w (DE-627)385614756 |w (DE-600)2142897-9 |x 1862-281X |7 nnns
773 1 8 |g volume:65 |g year:2022 |g number:4 |g day:18 |g month:03 |g pages:802-808
856 4 0 |u https://dx.doi.org/10.1007/s11431-021-1988-y |z lizenzpflichtig |3 Volltext
912     |a GBV_USEFLAG_A
912     |a SYSFLAG_A
912     |a GBV_SPRINGER
912     |a GBV_ILN_20
912     |a GBV_ILN_22
912     |a GBV_ILN_23
912     |a GBV_ILN_24
912     |a GBV_ILN_31
912     |a GBV_ILN_32
912     |a GBV_ILN_39
912     |a GBV_ILN_40
912     |a GBV_ILN_60
912     |a GBV_ILN_62
912     |a GBV_ILN_65
912     |a GBV_ILN_69
912     |a GBV_ILN_70
912     |a GBV_ILN_73
912     |a GBV_ILN_74
912     |a GBV_ILN_90
912     |a GBV_ILN_95
912     |a GBV_ILN_100
912     |a GBV_ILN_105
912     |a GBV_ILN_110
912     |a GBV_ILN_120
912     |a GBV_ILN_138
912     |a GBV_ILN_152
912     |a GBV_ILN_161
912     |a GBV_ILN_171
912     |a GBV_ILN_187
912     |a GBV_ILN_224
912     |a GBV_ILN_250
912     |a GBV_ILN_281
912     |a GBV_ILN_285
912     |a GBV_ILN_293
912     |a GBV_ILN_370
912     |a GBV_ILN_602
912     |a GBV_ILN_702
951     |a AR
952     |d 65 |j 2022 |e 4 |b 18 |c 03 |h 802-808
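The `|code value` subfield notation used in the fields above can be split mechanically. A naive helper (not a full MARC parser; it assumes `|` only introduces subfield codes, and real records should go through a dedicated library such as pymarc):

```python
def parse_subfields(field_body):
    """Split a MARC field body written in '|code value' notation
    (as in the record above) into (code, value) pairs."""
    parts = [p.strip() for p in field_body.split("|") if p.strip()]
    return [(p[0], p[1:].strip()) for p in parts]

# The 773 linkage field from this record:
pairs = parse_subfields("|g volume:65 |g year:2022 |g number:4 |g pages:802-808")
print(pairs)
# [('g', 'volume:65'), ('g', 'year:2022'), ('g', 'number:4'), ('g', 'pages:802-808')]

# The |g values are themselves key:value pairs describing the host issue.
info = dict(kv.split(":", 1) for _, kv in pairs)
print(info["pages"])  # 802-808
```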
author |
Hong, DanFeng |
spellingShingle |
Hong, DanFeng misc classification misc hyperspectral image misc land cover misc multimodal misc multispectral image misc remote sensing misc subspace learning misc superpixels Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification |
authorStr |
Hong, DanFeng |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)385614756 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1862-281X |
topic_title |
Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification classification (dpeaa)DE-He213 hyperspectral image (dpeaa)DE-He213 land cover (dpeaa)DE-He213 multimodal (dpeaa)DE-He213 multispectral image (dpeaa)DE-He213 remote sensing (dpeaa)DE-He213 subspace learning (dpeaa)DE-He213 superpixels (dpeaa)DE-He213 |
topic |
misc classification misc hyperspectral image misc land cover misc multimodal misc multispectral image misc remote sensing misc subspace learning misc superpixels |
topic_unstemmed |
misc classification misc hyperspectral image misc land cover misc multimodal misc multispectral image misc remote sensing misc subspace learning misc superpixels |
topic_browse |
misc classification misc hyperspectral image misc land cover misc multimodal misc multispectral image misc remote sensing misc subspace learning misc superpixels |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Science in China |
hierarchy_parent_id |
385614756 |
hierarchy_top_title |
Science in China |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)385614756 (DE-600)2142897-9 |
title |
Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification |
ctrlnum |
(DE-627)SPR050719297 (SPR)s11431-021-1988-y-e |
title_full |
Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification |
author_sort |
Hong, DanFeng |
journal |
Science in China |
journalStr |
Science in China |
lang_code |
eng |
isOA_bool |
false |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
802 |
author_browse |
Hong, DanFeng Wu, Xin Yao, Jing Zhu, XiaoXiang |
container_volume |
65 |
format_se |
Elektronische Aufsätze |
author-letter |
Hong, DanFeng |
doi_str_mv |
10.1007/s11431-021-1988-y |
title_sort |
beyond pixels: learning from multimodal hyperspectral superpixels for land cover classification |
title_auth |
Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification |
abstract |
Abstract Despite the many advanced classification models that have recently been developed for the land cover mapping task, the monotonicity of a single remote sensing data source, such as using only hyperspectral or multispectral data, keeps classification accuracy from improving further and tends to hit a performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, which jointly learns multimodal feature representations from hyperspectral (HS) and multispectral (MS) superpixels for more accurate land cover classification (LCC) results. Supace can learn a common subspace across multimodal remote sensing (RS) data, in which the diverse and complementary information from different modalities is better combined, enhancing the discriminative ability of the learned features more effectively. To better capture the semantic information of objects in the feature learning process, superpixels, which go beyond individual pixels, are taken as the study object in Supace for LCC. Extensive experiments conducted on two popular hyperspectral and multispectral datasets demonstrate the superiority of the proposed Supace in the land cover classification task compared with several well-known baselines for multimodal remote sensing image feature learning. © Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2022
abstractGer |
Abstract Despite the many advanced classification models that have recently been developed for the land cover mapping task, the monotonicity of a single remote sensing data source, such as using only hyperspectral or multispectral data, keeps classification accuracy from improving further and tends to hit a performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, which jointly learns multimodal feature representations from hyperspectral (HS) and multispectral (MS) superpixels for more accurate land cover classification (LCC) results. Supace can learn a common subspace across multimodal remote sensing (RS) data, in which the diverse and complementary information from different modalities is better combined, enhancing the discriminative ability of the learned features more effectively. To better capture the semantic information of objects in the feature learning process, superpixels, which go beyond individual pixels, are taken as the study object in Supace for LCC. Extensive experiments conducted on two popular hyperspectral and multispectral datasets demonstrate the superiority of the proposed Supace in the land cover classification task compared with several well-known baselines for multimodal remote sensing image feature learning. © Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2022
abstract_unstemmed |
Abstract Despite the many advanced classification models that have recently been developed for the land cover mapping task, the monotonicity of a single remote sensing data source, such as using only hyperspectral or multispectral data, keeps classification accuracy from improving further and tends to hit a performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, which jointly learns multimodal feature representations from hyperspectral (HS) and multispectral (MS) superpixels for more accurate land cover classification (LCC) results. Supace can learn a common subspace across multimodal remote sensing (RS) data, in which the diverse and complementary information from different modalities is better combined, enhancing the discriminative ability of the learned features more effectively. To better capture the semantic information of objects in the feature learning process, superpixels, which go beyond individual pixels, are taken as the study object in Supace for LCC. Extensive experiments conducted on two popular hyperspectral and multispectral datasets demonstrate the superiority of the proposed Supace in the land cover classification task compared with several well-known baselines for multimodal remote sensing image feature learning. © Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2022
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 |
container_issue |
4 |
title_short |
Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification |
url |
https://dx.doi.org/10.1007/s11431-021-1988-y |
remote_bool |
true |
author2 |
Wu, Xin Yao, Jing Zhu, XiaoXiang |
author2Str |
Wu, Xin Yao, Jing Zhu, XiaoXiang |
ppnlink |
385614756 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11431-021-1988-y |
up_date |
2024-07-03T17:20:47.702Z |
_version_ |
1803579291288469504 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">SPR050719297</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230507184854.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230507s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11431-021-1988-y</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR050719297</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s11431-021-1988-y-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hong, DanFeng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" 
"><subfield code="a">© Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Despite tons of advanced classification models that have recently been developed for the land cover mapping task, the monotonicity of a single remote sensing data source, such as only using hyperspectral data or multispectral data, hinders the classification accuracy from being further improved and tends to meet the performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, by jointly learning multimodal feature representations from HS and MS superpixels for more accurate LCC results. Supace can learn a common subspace across multimodal RS data, where the diverse and complementary information from different modalities can be better combined, being capable of enhancing the discriminative ability of to-be-learned features in a more effective way. To better capture semantic information of objects in the feature learning process, superpixels that beyond pixels are regarded as the study object in our Supace for LCC. 
Extensive experiments have been conducted on two popular hyperspectral and multispectral datasets, demonstrating the superiority of the proposed Supace in the land cover classification task compared with several well-known baselines related to multimodal remote sensing image feature learning.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">classification</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">hyperspectral image</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">land cover</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">multimodal</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">multispectral image</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">remote sensing</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">subspace learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">superpixels</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wu, Xin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yao, Jing</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhu, XiaoXiang</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Science in China</subfield><subfield code="d">Heidelberg : Springer, 
1997</subfield><subfield code="g">65(2022), 4 vom: 18. März, Seite 802-808</subfield><subfield code="w">(DE-627)385614756</subfield><subfield code="w">(DE-600)2142897-9</subfield><subfield code="x">1862-281X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:65</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:4</subfield><subfield code="g">day:18</subfield><subfield code="g">month:03</subfield><subfield code="g">pages:802-808</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s11431-021-1988-y</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">65</subfield><subfield code="j">2022</subfield><subfield code="e">4</subfield><subfield code="b">18</subfield><subfield code="c">03</subfield><subfield code="h">802-808</subfield></datafield></record></collection>
|
score |
7.4014635 |