Cross-city matters: a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks
Artificial intelligence (AI) approaches nowadays have gained remarkable success in single-modality-dominated remote sensing (RS) applications, especially with an emphasis on individual urban environments (e.g., single cities or regions). Yet these AI models tend to meet the performance bottleneck in...

Detailed description

Author(s): Hong, Danfeng [author]; Zhang, Bing [author]; Li, Hao [author]; Li, Yuxuan [author]; Yao, Jing [author]; Li, Chenyu [author]; Werner, Martin [author]; Chanussot, Jocelyn [author]; Zipf, Alexander, 1971- [author]; Zhu, Xiao Xiang [author]
Format: E-article
Language: English
Published: 15 December 2023
Note: Available online 30 October 2023; version of record 30 October 2023; viewed on 24.01.2024
Extent: 17 pages, illustrations
Contained in: Remote sensing of environment - Amsterdam [u.a.]: Elsevier Science, 1969, 299 (2023), December, article ID 113856, pages 1-17
Host item details: volume:299; year:2023; month:12; elocationid:113856; pages:1-17; extent:17
DOI: 10.1016/j.rse.2023.113856
Catalog ID: 1878862448
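The article's abstract (field 520 below) notes that HighDAN adopts the Dice loss to alleviate cross-city class imbalance in semantic segmentation. A minimal sketch of a soft multi-class Dice loss, assuming softmax probabilities and one-hot labels; function name, shapes, and the smoothing constant are illustrative, not the paper's exact formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over C classes.

    pred:   (N, C) array of softmax probabilities
    target: (N, C) array of one-hot ground-truth labels
    eps is a smoothing term that avoids division by zero for absent classes.
    """
    intersection = (pred * target).sum(axis=0)          # per-class overlap
    denom = pred.sum(axis=0) + target.sum(axis=0)       # per-class mass
    dice = (2.0 * intersection + eps) / (denom + eps)   # 1.0 = perfect overlap
    return 1.0 - dice.mean()                            # average over classes
```

Because each class contributes equally to the mean regardless of its pixel count, rare land-cover classes are not drowned out the way they are under plain cross-entropy, which is the usual motivation for Dice-style losses in imbalanced segmentation.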
LEADER 01000caa a2200265 4500
001 1878862448
003 DE-627
005 20240307030141.0
007 cr uuu---uuuuu
008 240124s2023 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.rse.2023.113856 |2 doi
035    |a (DE-627)1878862448
035    |a (DE-599)KXP1878862448
035    |a (OCoLC)1425207759
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
100 1  |a Hong, Danfeng |e verfasserin |0 (DE-588)1229101713 |0 (DE-627)1751087808 |4 aut
245 10 |a Cross-city matters |b a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks |c Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li, Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu
264  1 |c 15 December 2023
300    |b Illustrationen
300    |a 17
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Online verfügbar 30 October 2023, Version des Artikels 30 October 2023
500    |a Gesehen am 24.01.2024
520    |a Artificial intelligence (AI) approaches nowadays have gained remarkable success in single-modality-dominated remote sensing (RS) applications, especially with an emphasis on individual urban environments (e.g., single cities or regions). Yet these AI models tend to meet the performance bottleneck in the case studies across cities or regions, due to the lack of diverse RS information and cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, SAR) for the study purpose of the cross-city semantic segmentation task (called C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability from the multi-city environments. HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion but also closing the gap derived from enormous differences of RS image representations between different cities by means of adversarial learning. In addition, the Dice loss is considered in HighDAN to alleviate the class imbalance issue caused by factors across cities. Extensive experiments conducted on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability, compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (involving the proposed HighDAN) will be available publicly at https://github.com/danfenghong/RSE_Cross-city.
650  4 |a Cross-city
650  4 |a Deep learning
650  4 |a Dice loss
650  4 |a Domain adaptation
650  4 |a High-resolution network
650  4 |a Land cover
650  4 |a Multimodal benchmark datasets
650  4 |a Remote sensing
650  4 |a Segmentation
700 1  |a Zhang, Bing |e verfasserin |4 aut
700 1  |a Li, Hao |e verfasserin |4 aut
700 1  |a Li, Yuxuan |e verfasserin |4 aut
700 1  |a Yao, Jing |e verfasserin |4 aut
700 1  |a Li, Chenyu |e verfasserin |4 aut
700 1  |a Werner, Martin |e verfasserin |4 aut
700 1  |a Chanussot, Jocelyn |e verfasserin |4 aut
700 1  |a Zipf, Alexander |d 1971- |e verfasserin |0 (DE-588)123246369 |0 (DE-627)082437076 |0 (DE-576)175641056 |4 aut
700 1  |a Zhu, Xiao Xiang |e verfasserin |4 aut
773 08 |i Enthalten in |t Remote sensing of environment |d Amsterdam [u.a.] : Elsevier Science, 1969 |g 299(2023) vom: Dez., Artikel-ID 113856, Seite 1-17 |h Online-Ressource |w (DE-627)306591324 |w (DE-600)1498713-2 |w (DE-576)098330268 |x 1879-0704 |7 nnns
773 18 |g volume:299 |g year:2023 |g month:12 |g elocationid:113856 |g pages:1-17 |g extent:17
856 40 |u https://doi.org/10.1016/j.rse.2023.113856 |x Verlag |x Resolving-System |z lizenzpflichtig |3 Volltext
856 40 |u https://www.sciencedirect.com/science/article/pii/S0034425723004078 |x Verlag |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_U
912    |a GBV_ILN_2013
912    |a ISIL_DE-16-250
912    |a SYSFLAG_1
912    |a GBV_KXP
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_187
912    |a GBV_ILN_213
912    |a GBV_ILN_224
912    |a GBV_ILN_230
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2001
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2007
912    |a GBV_ILN_2009
912    |a GBV_ILN_2010
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2026
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2055
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2106
912    |a GBV_ILN_2110
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2232
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4242
912    |a GBV_ILN_4249
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
912    |a GBV_ILN_4700
951    |a AR
952    |d 299 |j 2023 |c 12 |i 113856 |h 1-17 |g 17
980    |2 2013 |1 01 |x DE-16-250 |b 446730200X |c 00 |f --%%-- |d --%%-- |e --%%-- |j --%%-- |y l01 |z 24-01-24
982    |2 2013 |1 01 |x DE-16-250 |8 00 |s s |a hd2023
982    |2 2013 |1 01 |x DE-16-250 |8 01 |s s |0 (DE-627)1410508463 |a wissenschaftlicher Artikel (Zeitschrift)
982    |2 2013 |1 01 |x DE-16-250 |8 02 |s s |a per_10
982    |2 2013 |1 01 |x DE-16-250 |8 03 |s s |a s_17
982    |2 2013 |1 01 |x DE-16-250 |8 04 |s p |0 (DE-627)1432438840 |a Zipf, Alexander
982    |2 2013 |1 01 |x DE-16-250 |8 04 |s k |0 (DE-627)1416534997 |a Geographisches Institut
982    |2 2013 |1 01 |x DE-16-250 |8 04 |s s |0 (DE-627)1410501914 |a Verfasser
982    |2 2013 |1 01 |x DE-16-250 |8 04 |s s |a pos_9
language |
English |
source |
Enthalten in Remote sensing of environment 299(2023) vom: Dez., Artikel-ID 113856, Seite 1-17 volume:299 year:2023 month:12 elocationid:113856 pages:1-17 extent:17 |
sourceStr |
Enthalten in Remote sensing of environment 299(2023) vom: Dez., Artikel-ID 113856, Seite 1-17 volume:299 year:2023 month:12 elocationid:113856 pages:1-17 extent:17 |
format_phy_str_mv |
Article |
building |
2013:0 |
institution |
findex.gbv.de |
selectbib_iln_str_mv |
2013@01 |
topic_facet |
Cross-city Deep learning Dice loss Domain adaptation High-resolution network Land cover Multimodal benchmark datasets Remote sensing Segmentation |
sw_local_iln_str_mv |
2013:hd2023 DE-16-250:hd2023 2013:wissenschaftlicher Artikel (Zeitschrift) DE-16-250:wissenschaftlicher Artikel (Zeitschrift) 2013:per_10 DE-16-250:per_10 2013:s_17 DE-16-250:s_17 2013:Zipf, Alexander DE-16-250:Zipf, Alexander 2013:Geographisches Institut DE-16-250:Geographisches Institut 2013:Verfasser DE-16-250:Verfasser 2013:pos_9 DE-16-250:pos_9 |
isfreeaccess_bool |
false |
container_title |
Remote sensing of environment |
authorswithroles_txt_mv |
Hong, Danfeng @@aut@@ Zhang, Bing @@aut@@ Li, Hao @@aut@@ Li, Yuxuan @@aut@@ Yao, Jing @@aut@@ Li, Chenyu @@aut@@ Werner, Martin @@aut@@ Chanussot, Jocelyn @@aut@@ Zipf, Alexander @@aut@@ Zhu, Xiao Xiang @@aut@@ |
publishDateDaySort_date |
2023-12-01T00:00:00Z |
hierarchy_top_id |
306591324 |
id |
1878862448 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a2200265 4500</leader><controlfield tag="001">1878862448</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240307030141.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240124s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.rse.2023.113856</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)1878862448</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KXP1878862448</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1425207759</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hong, Danfeng</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(DE-588)1229101713</subfield><subfield code="0">(DE-627)1751087808</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Cross-city matters</subfield><subfield code="b">a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks</subfield><subfield code="c">Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li, Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">15 December 2023</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield 
code="b">Illustrationen</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">17</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Online verfügbar 30 October 2023, Version des Artikels 30 October 2023</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Gesehen am 24.01.2024</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Artificial intelligence (AI) approaches nowadays have gained remarkable success in single-modality-dominated remote sensing (RS) applications, especially with an emphasis on individual urban environments (e.g., single cities or regions). Yet these AI models tend to meet the performance bottleneck in the case studies across cities or regions, due to the lack of diverse RS information and cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, SAR) for the study purpose of the cross-city semantic segmentation task (called C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability from the multi-city environments. 
HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion but also closing the gap derived from enormous differences of RS image representations between different cities by means of adversarial learning. In addition, the Dice loss is considered in HighDAN to alleviate the class imbalance issue caused by factors across cities. Extensive experiments conducted on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability, compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (involving the proposed HighDAN) will be available publicly at https://github.com/danfenghong/RSE_Cross-city.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cross-city</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Dice loss</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Domain adaptation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">High-resolution network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Land cover</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multimodal benchmark datasets</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Remote sensing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Segmentation</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Bing</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Hao</subfield><subfield code="e">verfasserin</subfield><subfield 
code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Yuxuan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yao, Jing</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Chenyu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Werner, Martin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chanussot, Jocelyn</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zipf, Alexander</subfield><subfield code="d">1971-</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(DE-588)123246369</subfield><subfield code="0">(DE-627)082437076</subfield><subfield code="0">(DE-576)175641056</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhu, Xiao Xiang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Remote sensing of environment</subfield><subfield code="d">Amsterdam [u.a.] 
: Elsevier Science, 1969</subfield><subfield code="g">299(2023) vom: Dez., Artikel-ID 113856, Seite 1-17</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)306591324</subfield><subfield code="w">(DE-600)1498713-2</subfield><subfield code="w">(DE-576)098330268</subfield><subfield code="x">1879-0704</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:299</subfield><subfield code="g">year:2023</subfield><subfield code="g">month:12</subfield><subfield code="g">elocationid:113856</subfield><subfield code="g">pages:1-17</subfield><subfield code="g">extent:17</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.rse.2023.113856</subfield><subfield code="x">Verlag</subfield><subfield code="x">Resolving-System</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.sciencedirect.com/science/article/pii/S0034425723004078</subfield><subfield code="x">Verlag</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2013</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ISIL_DE-16-250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_1</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_KXP</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">299</subfield><subfield 
code="j">2023</subfield><subfield code="c">12</subfield><subfield code="i">113856</subfield><subfield code="h">1-17</subfield><subfield code="g">17</subfield></datafield><datafield tag="980" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="b">446730200X</subfield><subfield code="c">00</subfield><subfield code="f">--%%--</subfield><subfield code="d">--%%--</subfield><subfield code="e">--%%--</subfield><subfield code="j">--%%--</subfield><subfield code="y">l01</subfield><subfield code="z">24-01-24</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">00</subfield><subfield code="s">s</subfield><subfield code="a">hd2023</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">01</subfield><subfield code="s">s</subfield><subfield code="0">(DE-627)1410508463</subfield><subfield code="a">wissenschaftlicher Artikel (Zeitschrift)</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">02</subfield><subfield code="s">s</subfield><subfield code="a">per_10</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">03</subfield><subfield code="s">s</subfield><subfield code="a">s_17</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">p</subfield><subfield code="0">(DE-627)1432438840</subfield><subfield code="a">Zipf, 
Alexander</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">k</subfield><subfield code="0">(DE-627)1416534997</subfield><subfield code="a">Geographisches Institut</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">s</subfield><subfield code="0">(DE-627)1410501914</subfield><subfield code="a">Verfasser</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">s</subfield><subfield code="a">pos_9</subfield></datafield></record></collection>
|
standort_str_mv |
--%%-- |
standort_iln_str_mv |
2013:--%%-- DE-16-250:--%%-- |
author |
Hong, Danfeng |
spellingShingle |
Hong, Danfeng misc Cross-city misc Deep learning misc Dice loss misc Domain adaptation misc High-resolution network misc Land cover misc Multimodal benchmark datasets misc Remote sensing misc Segmentation 2013 hd2023 2013 wissenschaftlicher Artikel (Zeitschrift) 2013 per_10 2013 s_17 2013 Zipf, Alexander 2013 Geographisches Institut 2013 Verfasser 2013 pos_9 Cross-city matters a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks |
authorStr |
Hong, Danfeng |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)306591324 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut aut aut |
typewithnormlink_str_mv |
Person@(DE-588)1229101713 DifferentiatedPerson@(DE-588)1229101713 Person@(DE-588)123246369 DifferentiatedPerson@(DE-588)123246369 |
collection |
KXP SWB GVK |
remote_str |
true |
last_changed_iln_str_mv |
2013@24-01-24 |
illustrated |
Not Illustrated |
issn |
1879-0704 |
topic_title |
2013 01 DE-16-250 00 s hd2023 2013 01 DE-16-250 01 s (DE-627)1410508463 wissenschaftlicher Artikel (Zeitschrift) 2013 01 DE-16-250 02 s per_10 2013 01 DE-16-250 03 s s_17 2013 01 DE-16-250 04 p (DE-627)1432438840 Zipf, Alexander 2013 01 DE-16-250 04 k (DE-627)1416534997 Geographisches Institut 2013 01 DE-16-250 04 s (DE-627)1410501914 Verfasser 2013 01 DE-16-250 04 s pos_9 Cross-city matters a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li, Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu Cross-city Deep learning Dice loss Domain adaptation High-resolution network Land cover Multimodal benchmark datasets Remote sensing Segmentation |
topic |
misc Cross-city misc Deep learning misc Dice loss misc Domain adaptation misc High-resolution network misc Land cover misc Multimodal benchmark datasets misc Remote sensing misc Segmentation 2013 hd2023 2013 wissenschaftlicher Artikel (Zeitschrift) 2013 per_10 2013 s_17 2013 Zipf, Alexander 2013 Geographisches Institut 2013 Verfasser 2013 pos_9 |
topic_unstemmed |
misc Cross-city misc Deep learning misc Dice loss misc Domain adaptation misc High-resolution network misc Land cover misc Multimodal benchmark datasets misc Remote sensing misc Segmentation 2013 hd2023 2013 wissenschaftlicher Artikel (Zeitschrift) 2013 per_10 2013 s_17 2013 Zipf, Alexander 2013 Geographisches Institut 2013 Verfasser 2013 pos_9 |
topic_browse |
misc Cross-city misc Deep learning misc Dice loss misc Domain adaptation misc High-resolution network misc Land cover misc Multimodal benchmark datasets misc Remote sensing misc Segmentation 2013 hd2023 2013 wissenschaftlicher Artikel (Zeitschrift) 2013 per_10 2013 s_17 2013 Zipf, Alexander 2013 Geographisches Institut 2013 Verfasser 2013 pos_9 |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
standort_txtP_mv |
--%%-- |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Remote sensing of environment |
normlinkwithtype_str_mv |
(DE-588)1229101713@Person (DE-588)1229101713@DifferentiatedPerson (DE-588)123246369@Person (DE-588)123246369@DifferentiatedPerson |
hierarchy_parent_id |
306591324 |
signature |
--%%-- |
signature_str_mv |
--%%-- |
hierarchy_top_title |
Remote sensing of environment |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)306591324 (DE-600)1498713-2 (DE-576)098330268 |
normlinkwithrole_str_mv |
(DE-588)1229101713@@aut@@ (DE-588)123246369@@aut@@ |
title |
Cross-city matters a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks |
ctrlnum |
(DE-627)1878862448 (DE-599)KXP1878862448 (OCoLC)1425207759 |
title_full |
Cross-city matters a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li, Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu |
author_sort |
Hong, Danfeng |
journal |
Remote sensing of environment |
journalStr |
Remote sensing of environment |
callnumber-first-code |
- |
lang_code |
eng |
isOA_bool |
false |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
txt |
container_start_page |
1 |
author_browse |
Hong, Danfeng Zhang, Bing Li, Hao Li, Yuxuan Yao, Jing Li, Chenyu Werner, Martin Chanussot, Jocelyn Zipf, Alexander Zhu, Xiao Xiang |
selectkey |
2013:l |
container_volume |
299 |
physical |
Illustrationen 17 |
format_se |
Elektronische Aufsätze |
author-letter |
Hong, Danfeng |
title_sub |
a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks |
doi_str_mv |
10.1016/j.rse.2023.113856 |
normlink |
1229101713 1751087808 123246369 082437076 175641056 1410508463 1432438840 1416534997 1410501914 |
normlink_prefix_str_mv |
(DE-588)1229101713 (DE-627)1751087808 (DE-588)123246369 (DE-627)082437076 (DE-576)175641056 (DE-627)1410508463 (DE-627)1432438840 (DE-627)1416534997 (DE-627)1410501914 |
author2-role |
verfasserin |
title_sort |
cross-city mattersa multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks |
title_auth |
Cross-city matters a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks |
abstract |
Artificial intelligence (AI) approaches have achieved remarkable success in single-modality-dominated remote sensing (RS) applications, especially those focused on individual urban environments (e.g., single cities or regions). Yet these AI models tend to hit a performance bottleneck in case studies across cities or regions, owing to the lack of diverse RS information and of cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR data) for the cross-city semantic segmentation task (called the C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability across multi-city environments. HighDAN not only retains the spatial topology of the studied urban scene well via a parallel high-to-low-resolution fusion scheme but also closes the gap arising from the large differences in RS image representations between cities by means of adversarial learning. In addition, the Dice loss is used in HighDAN to alleviate the class imbalance caused by cross-city factors. Extensive experiments conducted on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (including the proposed HighDAN) will be made publicly available at https://github.com/danfenghong/RSE_Cross-city. Online verfügbar 30 October 2023, Version des Artikels 30 October 2023 Gesehen am 24.01.2024
abstractGer |
Artificial intelligence (AI) approaches have achieved remarkable success in single-modality-dominated remote sensing (RS) applications, especially those focused on individual urban environments (e.g., single cities or regions). Yet these AI models tend to hit a performance bottleneck in case studies across cities or regions, owing to the lack of diverse RS information and of cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR data) for the cross-city semantic segmentation task (called the C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability across multi-city environments. HighDAN not only retains the spatial topology of the studied urban scene well via a parallel high-to-low-resolution fusion scheme but also closes the gap arising from the large differences in RS image representations between cities by means of adversarial learning. In addition, the Dice loss is used in HighDAN to alleviate the class imbalance caused by cross-city factors. Extensive experiments conducted on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (including the proposed HighDAN) will be made publicly available at https://github.com/danfenghong/RSE_Cross-city. Online verfügbar 30 October 2023, Version des Artikels 30 October 2023 Gesehen am 24.01.2024
abstract_unstemmed |
Artificial intelligence (AI) approaches nowadays have gained remarkable success in single-modality-dominated remote sensing (RS) applications, especially with an emphasis on individual urban environments (e.g., single cities or regions). Yet these AI models tend to meet the performance bottleneck in the case studies across cities or regions, due to the lack of diverse RS information and cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, SAR) for the study purpose of the cross-city semantic segmentation task (called C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability from the multi-city environments. HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion but also closing the gap derived from enormous differences of RS image representations between different cities by means of adversarial learning. In addition, the Dice loss is considered in HighDAN to alleviate the class imbalance issue caused by factors across cities. Extensive experiments conducted on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability, compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (involving the proposed HighDAN) will be available publicly at https://github.com/danfenghong/RSE_Cross-city. Online verfügbar 30 October 2023, Version des Artikels 30 October 2023 Gesehen am 24.01.2024 |
title_short |
Cross-city matters |
url |
https://doi.org/10.1016/j.rse.2023.113856 https://www.sciencedirect.com/science/article/pii/S0034425723004078 |
author2 |
Zhang, Bing Li, Hao Li, Yuxuan Yao, Jing Li, Chenyu Werner, Martin Chanussot, Jocelyn Zipf, Alexander 1971- Zhu, Xiao Xiang |
ppnlink |
306591324 |
GND_str_mv |
Danfeng Hong Hong, Danfeng Zipf, Alexander Rudolf Zipf, Alexander |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.rse.2023.113856 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a2200265 4500</leader><controlfield tag="001">1878862448</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240307030141.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240124s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.rse.2023.113856</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)1878862448</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KXP1878862448</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1425207759</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hong, Danfeng</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(DE-588)1229101713</subfield><subfield code="0">(DE-627)1751087808</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Cross-city matters</subfield><subfield code="b">a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks</subfield><subfield code="c">Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li, Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">15 December 2023</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield 
code="b">Illustrationen</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">17</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Online verfügbar 30 October 2023, Version des Artikels 30 October 2023</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Gesehen am 24.01.2024</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Artificial intelligence (AI) approaches nowadays have gained remarkable success in single-modality-dominated remote sensing (RS) applications, especially with an emphasis on individual urban environments (e.g., single cities or regions). Yet these AI models tend to meet the performance bottleneck in the case studies across cities or regions, due to the lack of diverse RS information and cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, SAR) for the study purpose of the cross-city semantic segmentation task (called C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability from the multi-city environments. 
HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion but also closing the gap derived from enormous differences of RS image representations between different cities by means of adversarial learning. In addition, the Dice loss is considered in HighDAN to alleviate the class imbalance issue caused by factors across cities. Extensive experiments conducted on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability, compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (involving the proposed HighDAN) will be available publicly at https://github.com/danfenghong/RSE_Cross-city.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cross-city</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Dice loss</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Domain adaptation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">High-resolution network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Land cover</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multimodal benchmark datasets</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Remote sensing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Segmentation</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Bing</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Hao</subfield><subfield code="e">verfasserin</subfield><subfield 
code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Yuxuan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yao, Jing</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Chenyu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Werner, Martin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chanussot, Jocelyn</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zipf, Alexander</subfield><subfield code="d">1971-</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(DE-588)123246369</subfield><subfield code="0">(DE-627)082437076</subfield><subfield code="0">(DE-576)175641056</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhu, Xiao Xiang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Remote sensing of environment</subfield><subfield code="d">Amsterdam [u.a.] 
: Elsevier Science, 1969</subfield><subfield code="g">299(2023) vom: Dez., Artikel-ID 113856, Seite 1-17</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)306591324</subfield><subfield code="w">(DE-600)1498713-2</subfield><subfield code="w">(DE-576)098330268</subfield><subfield code="x">1879-0704</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:299</subfield><subfield code="g">year:2023</subfield><subfield code="g">month:12</subfield><subfield code="g">elocationid:113856</subfield><subfield code="g">pages:1-17</subfield><subfield code="g">extent:17</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.rse.2023.113856</subfield><subfield code="x">Verlag</subfield><subfield code="x">Resolving-System</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.sciencedirect.com/science/article/pii/S0034425723004078</subfield><subfield code="x">Verlag</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2013</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ISIL_DE-16-250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_1</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_KXP</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">299</subfield><subfield 
code="j">2023</subfield><subfield code="c">12</subfield><subfield code="i">113856</subfield><subfield code="h">1-17</subfield><subfield code="g">17</subfield></datafield><datafield tag="980" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="b">446730200X</subfield><subfield code="c">00</subfield><subfield code="f">--%%--</subfield><subfield code="d">--%%--</subfield><subfield code="e">--%%--</subfield><subfield code="j">--%%--</subfield><subfield code="y">l01</subfield><subfield code="z">24-01-24</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">00</subfield><subfield code="s">s</subfield><subfield code="a">hd2023</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">01</subfield><subfield code="s">s</subfield><subfield code="0">(DE-627)1410508463</subfield><subfield code="a">wissenschaftlicher Artikel (Zeitschrift)</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">02</subfield><subfield code="s">s</subfield><subfield code="a">per_10</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">03</subfield><subfield code="s">s</subfield><subfield code="a">s_17</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">p</subfield><subfield code="0">(DE-627)1432438840</subfield><subfield code="a">Zipf, 
Alexander</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">k</subfield><subfield code="0">(DE-627)1416534997</subfield><subfield code="a">Geographisches Institut</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">s</subfield><subfield code="0">(DE-627)1410501914</subfield><subfield code="a">Verfasser</subfield></datafield><datafield tag="982" ind1=" " ind2=" "><subfield code="2">2013</subfield><subfield code="1">01</subfield><subfield code="x">DE-16-250</subfield><subfield code="8">04</subfield><subfield code="s">s</subfield><subfield code="a">pos_9</subfield></datafield></record></collection>
|