Heterogeneous context interaction network for vehicle re-identification
Capturing global and subtle discriminative information using attention mechanisms is essential to address the challenge of high inter-class similarity in the vehicle re-identification (Re-ID) task. Mixing the self-information of nodes or modeling context from pairwise dependencies between nodes are the core ideas of current advanced attention mechanisms. This paper explores how to use both dependency context and self-context efficiently so that attention can be learned more effectively. We propose a heterogeneous context interaction (HCI) attention mechanism that infers node weights from the interactions of global dependency contexts and local self-contexts to enhance the effect of attention learning. To reduce computational complexity, global dependency contexts are modeled by aggregating number-compressed pairwise dependencies, and the interactions of heterogeneous contexts are restricted to a certain range. Based on this mechanism, we propose a heterogeneous context interaction network (HCI-Net), which uses a channel heterogeneous context interaction module (CHCI) and a spatial heterogeneous context interaction module (SHCI), and introduces a rigid partitioning strategy to extract important global and fine-grained features. In addition, we design a non-similarity constraint (NSC) that forces HCI-Net to learn diverse subtle discriminative information. Experimental results on two large datasets, VeRi-776 and VehicleID, show that the proposed HCI-Net achieves state-of-the-art performance. In particular, the mean average precision (mAP) reaches 83.8% on the VeRi-776 dataset.
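The paper's code is not part of this record, but the mechanism summarized above — attention weights inferred from the interaction of a compressed global dependency context with a local self-context — can be sketched loosely. The PyTorch module below is a hypothetical illustration only; the class name, reduction ratio, and the small mixing MLP are assumptions made for this sketch and do not reproduce the authors' CHCI module.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelContextInteraction(nn.Module):
    """Rough sketch of channel attention driven by two heterogeneous contexts:
    a global dependency context (pairwise channel similarities against a
    number-compressed set of channels) and a local self-context (per-channel
    mean activation). Not the paper's CHCI implementation."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Compress the number of reference channels so the dependency matrix
        # is C x (C // reduction) instead of C x C.
        self.compress = nn.Conv1d(channels, channels // reduction, kernel_size=1)
        # Small MLP through which the two contexts interact, per channel.
        self.mix = nn.Sequential(
            nn.Linear(channels // reduction + 1, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feat = x.flatten(2)                                    # (B, C, HW)
        self_ctx = feat.mean(dim=2, keepdim=True)              # local self-context, (B, C, 1)
        compressed = self.compress(feat)                       # (B, C/r, HW)
        dep_ctx = torch.bmm(feat, compressed.transpose(1, 2))  # global dependencies, (B, C, C/r)
        dep_ctx = F.softmax(dep_ctx / (h * w) ** 0.5, dim=-1)
        # Interaction of the two heterogeneous contexts -> one weight per channel.
        weights = torch.sigmoid(self.mix(torch.cat([dep_ctx, self_ctx], dim=-1)))
        return x * weights.view(b, c, 1, 1)

# Example: ChannelContextInteraction(256)(torch.randn(2, 256, 16, 16)) returns
# a (2, 256, 16, 16) tensor with per-channel re-weighting applied.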
Author(s): Sun, Ke [author]; Pang, Xiyu [author]; Zheng, Meifeng [author]; Nie, Xiushan [author]; Li, Xi [author]; Zhou, Houren [author]; Yin, Yilong [author]
Format: E-article
Language: English
Published: 2023
Subjects: Vehicle re-identification; Neural network; Global dependency contexts; Local self-contexts; Heterogeneous context interaction
Parent work: Contained in: Neural networks - Amsterdam : Elsevier, 1988, 169, pages 293-306
Parent work: volume:169 ; pages:293-306
DOI / URN: 10.1016/j.neunet.2023.10.032
Catalog ID: ELV066194350
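For reference, the mAP figure quoted in the abstract above (83.8% on VeRi-776) is the standard retrieval metric: each query image ranks the gallery by similarity, average precision is computed over that ranking, and the mean is taken over all queries. The sketch below is a generic illustration of this computation, not the official VeRi-776 evaluation code; real Re-ID protocols additionally drop same-camera and junk gallery entries, and the toy data here are made up.

import numpy as np

def average_precision(ranked_labels: np.ndarray, query_label: int) -> float:
    """AP for one query, given gallery labels sorted by descending similarity."""
    matches = (ranked_labels == query_label).astype(np.float64)
    if matches.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(matches)
    precision_at_hit = cum_hits / np.arange(1, len(matches) + 1)
    return float((precision_at_hit * matches).sum() / matches.sum())

def mean_average_precision(sim: np.ndarray, q_labels: np.ndarray, g_labels: np.ndarray) -> float:
    """sim is a (num_queries, num_gallery) similarity matrix."""
    aps = []
    for i, qlab in enumerate(q_labels):
        order = np.argsort(-sim[i])            # gallery indices, best match first
        aps.append(average_precision(g_labels[order], qlab))
    return float(np.mean(aps))

# Toy example: 2 queries, 4 gallery images, identities 0 and 1.
sim = np.array([[0.9, 0.1, 0.8, 0.2],
                [0.3, 0.7, 0.2, 0.6]])
print(mean_average_precision(sim, np.array([0, 1]), np.array([0, 1, 0, 1])))  # -> 1.0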
LEADER  01000caa a22002652 4500
001     ELV066194350
003     DE-627
005     20240114093620.0
007     cr uuu---uuuuu
008     231220s2023 xx |||||o 00| ||eng c
024 7   |a 10.1016/j.neunet.2023.10.032 |2 doi
035     |a (DE-627)ELV066194350
035     |a (ELSEVIER)S0893-6080(23)00586-5
040     |a DE-627 |b ger |c DE-627 |e rda
041     |a eng
082 0 4 |a 004 |q VZ
084     |a 54.72 |2 bkl
100 1   |a Sun, Ke |e verfasserin |0 (orcid)0009-0007-1199-8530 |4 aut
245 1 0 |a Heterogeneous context interaction network for vehicle re-identification
264   1 |c 2023
336     |a nicht spezifiziert |b zzz |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
520     |a Capturing global and subtle discriminative information using attention mechanisms is essential to address the challenge of inter-class high similarity for vehicle re-identification (Re-ID) task. Mixing self-information of nodes or modeling context based on pairwise dependencies between nodes are the core ideas of current advanced attention mechanisms. This paper aims to explore how to utilize both dependency context and self-context in an efficient way to facilitate attention to learn more effectively. We propose a heterogeneous context interaction (HCI) attention mechanism that infers the weights of nodes from the interactions of global dependency contexts and local self-contexts to enhance the effect of attention learning. To reduce computational complexity, global dependency contexts are modeled by aggregating number-compressed pairwise dependencies, and the interactions of heterogeneous contexts are restricted to a certain range. Based on this mechanism, we propose a heterogeneous context interaction network (HCI-Net), which uses channel heterogeneous context interaction module (CHCI) and spatial heterogeneous context interaction module (SHCI), and introduces a rigid partitioning strategy to extract important global and fine-grained features. In addition, we design a non-similarity constraint (NSC) that forces the HCI-Net to learn diverse subtle discriminative information. The experiment results on two large datasets, VeRi-776 and VehicleID, show that our proposed HCI-Net achieves the state-of-the-art performance. In particular, the mean average precision (mAP) reaches 83.8% on VeRi-776 dataset.
650   4 |a Vehicle re-identification
650   4 |a Neural network
650   4 |a Global dependency contexts
650   4 |a Local self-contexts
650   4 |a Heterogeneous context interaction
700 1   |a Pang, Xiyu |e verfasserin |4 aut
700 1   |a Zheng, Meifeng |e verfasserin |4 aut
700 1   |a Nie, Xiushan |e verfasserin |4 aut
700 1   |a Li, Xi |e verfasserin |4 aut
700 1   |a Zhou, Houren |e verfasserin |4 aut
700 1   |a Yin, Yilong |e verfasserin |4 aut
773 0 8 |i Enthalten in |t Neural networks |d Amsterdam : Elsevier, 1988 |g 169, Seite 293-306 |h Online-Ressource |w (DE-627)302468536 |w (DE-600)1491372-0 |w (DE-576)07971997X |x 1879-2782 |7 nnns
773 1 8 |g volume:169 |g pages:293-306
912     |a GBV_USEFLAG_U
912     |a GBV_ELV
912     |a SYSFLAG_U
912     |a GBV_ILN_20
912     |a GBV_ILN_22
912     |a GBV_ILN_23
912     |a GBV_ILN_24
912     |a GBV_ILN_31
912     |a GBV_ILN_32
912     |a GBV_ILN_40
912     |a GBV_ILN_60
912     |a GBV_ILN_62
912     |a GBV_ILN_65
912     |a GBV_ILN_69
912     |a GBV_ILN_70
912     |a GBV_ILN_73
912     |a GBV_ILN_74
912     |a GBV_ILN_90
912     |a GBV_ILN_95
912     |a GBV_ILN_100
912     |a GBV_ILN_101
912     |a GBV_ILN_105
912     |a GBV_ILN_110
912     |a GBV_ILN_150
912     |a GBV_ILN_151
912     |a GBV_ILN_187
912     |a GBV_ILN_213
912     |a GBV_ILN_224
912     |a GBV_ILN_230
912     |a GBV_ILN_370
912     |a GBV_ILN_602
912     |a GBV_ILN_702
912     |a GBV_ILN_2001
912     |a GBV_ILN_2003
912     |a GBV_ILN_2004
912     |a GBV_ILN_2005
912     |a GBV_ILN_2007
912     |a GBV_ILN_2009
912     |a GBV_ILN_2010
912     |a GBV_ILN_2011
912     |a GBV_ILN_2014
912     |a GBV_ILN_2015
912     |a GBV_ILN_2020
912     |a GBV_ILN_2021
912     |a GBV_ILN_2025
912     |a GBV_ILN_2026
912     |a GBV_ILN_2027
912     |a GBV_ILN_2034
912     |a GBV_ILN_2044
912     |a GBV_ILN_2048
912     |a GBV_ILN_2049
912     |a GBV_ILN_2050
912     |a GBV_ILN_2055
912     |a GBV_ILN_2056
912     |a GBV_ILN_2059
912     |a GBV_ILN_2061
912     |a GBV_ILN_2064
912     |a GBV_ILN_2106
912     |a GBV_ILN_2110
912     |a GBV_ILN_2111
912     |a GBV_ILN_2112
912     |a GBV_ILN_2122
912     |a GBV_ILN_2129
912     |a GBV_ILN_2143
912     |a GBV_ILN_2152
912     |a GBV_ILN_2153
912     |a GBV_ILN_2190
912     |a GBV_ILN_2232
912     |a GBV_ILN_2336
912     |a GBV_ILN_2470
912     |a GBV_ILN_2507
912     |a GBV_ILN_4035
912     |a GBV_ILN_4037
912     |a GBV_ILN_4112
912     |a GBV_ILN_4125
912     |a GBV_ILN_4242
912     |a GBV_ILN_4249
912     |a GBV_ILN_4251
912     |a GBV_ILN_4305
912     |a GBV_ILN_4306
912     |a GBV_ILN_4307
912     |a GBV_ILN_4313
912     |a GBV_ILN_4322
912     |a GBV_ILN_4323
912     |a GBV_ILN_4324
912     |a GBV_ILN_4326
912     |a GBV_ILN_4333
912     |a GBV_ILN_4334
912     |a GBV_ILN_4338
912     |a GBV_ILN_4393
912     |a GBV_ILN_4700
936 b k |a 54.72 |j Künstliche Intelligenz |q VZ
951     |a AR
952     |d 169 |h 293-306
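The block above is the MARC 21 bibliographic record for the article. Assuming the record is also available as MARCXML (the standard XML serialization of MARC 21, with the usual <collection> wrapper around the <record>), a few key fields can be extracted with the Python standard library alone; the file name below is a placeholder and the field selection is only an example.

import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """All subfield values with the given code from datafields with the given tag."""
    xpath = f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']"
    return [el.text for el in record.findall(xpath, NS)]

marcxml_path = "ELV066194350.xml"                 # placeholder path to a MARCXML export
root = ET.parse(marcxml_path).getroot()           # <collection> element
record = root.find(".//marc:record", NS)

doi = subfields(record, "024", "a")               # ['10.1016/j.neunet.2023.10.032']
title = subfields(record, "245", "a")             # ['Heterogeneous context interaction network ...']
authors = subfields(record, "100", "a") + subfields(record, "700", "a")
issn = subfields(record, "773", "x")              # ['1879-2782']
print(doi, title, authors, issn)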
ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">169</subfield><subfield code="h">293-306</subfield></datafield></record></collection>