KD-PAR: A knowledge distillation-based pedestrian attribute recognition model with multi-label mixed feature learning network
In this paper, a novel knowledge distillation (KD)-based pedestrian attribute recognition (PAR) model is developed, in which a multi-label mixed feature learning network (MMFL-Net) is designed and adopted as the student model. In particular, by applying grouped depth-wise separable convolution, structural re-parameterization, and a coordinate attention mechanism, multi-scale receptive-field information is sufficiently fused and spatially dependent robust features are extracted, while the model complexity is kept acceptable. To alleviate the imbalance among category samples, an attribute weight parameter is introduced into the multi-label loss. Moreover, a Jensen–Shannon (JS) divergence-based KD scheme facilitates the learning of MMFL-Net from the teacher model, which strengthens the fitting of deep feature correlations and thus yields a highly generalized model. The proposed KD-PAR is comprehensively evaluated through extensive experiments, and the results show its effectiveness and superiority over other advanced multi-label learning (MLL) methods and state-of-the-art PAR models, achieving a favorable balance between accuracy and complexity. In complex scenes such as blurry backgrounds, similar-object interference, and target occlusion, KD-PAR still presents satisfactory recognition results with strong robustness, thereby providing a feasible and practical solution to PAR tasks.
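The abstract describes two loss-level ingredients of KD-PAR: an attribute-weighted multi-label loss for imbalanced categories and a Jensen–Shannon (JS) divergence term that distills the teacher's per-attribute predictions into the MMFL-Net student. The sketch below is a minimal, hypothetical PyTorch illustration of how such terms could be combined; the exp(1 - p) weighting rule, the temperature, the 0.5 distillation weight, and all function names are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def attribute_weights(positive_ratio):
    # Rarer attributes get larger weights so imbalanced categories are not ignored
    # (one common weighted-BCE heuristic in the PAR literature; assumed here).
    return torch.exp(1.0 - positive_ratio)

def weighted_multilabel_bce(logits, targets, positive_ratio):
    w = attribute_weights(positive_ratio)                      # (num_attributes,)
    per_attr = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")                     # (batch, num_attributes)
    # Up-weight positive samples of rare attributes, down-weight their negatives.
    weight = torch.where(targets > 0.5, w, 1.0 / w)
    return (weight * per_attr).mean()

def js_distillation(student_logits, teacher_logits, temperature=2.0):
    # Treat each attribute as a Bernoulli distribution, softened by a temperature.
    p_s = torch.sigmoid(student_logits / temperature)
    p_t = torch.sigmoid(teacher_logits / temperature)
    m = 0.5 * (p_s + p_t)

    def bernoulli_kl(p, q, eps=1e-7):
        p, q = p.clamp(eps, 1 - eps), q.clamp(eps, 1 - eps)
        return p * (p / q).log() + (1 - p) * ((1 - p) / (1 - q)).log()

    # JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), averaged over batch and attributes.
    return 0.5 * (bernoulli_kl(p_s, m) + bernoulli_kl(p_t, m)).mean()

# Toy usage with random tensors standing in for model outputs (26 attributes, batch of 8).
batch, num_attr = 8, 26
student_logits = torch.randn(batch, num_attr, requires_grad=True)
teacher_logits = torch.randn(batch, num_attr)
labels = (torch.rand(batch, num_attr) > 0.7).float()
positive_ratio = labels.mean(dim=0).clamp(0.01, 0.99)  # per-attribute positive rate

loss = (weighted_multilabel_bce(student_logits, labels, positive_ratio)
        + 0.5 * js_distillation(student_logits, teacher_logits.detach()))
loss.backward()

In an actual training pipeline the random tensors would be replaced by the outputs of the teacher and the MMFL-Net student, and the positive-attribute ratios would be estimated over the whole training set rather than a single mini-batch.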
Detailed description

Author(s): Wu, Peishu [author]; Wang, Zidong [author]; Li, Han [author]; Zeng, Nianyin [author]
Format: Electronic article
Language: English
Published: 2023
Keywords: Pedestrian attribute recognition; Mixed feature learning; Structural re-parameterization; Attribute weight; Knowledge distillation
Contained in: Expert systems with applications - Amsterdam [u.a.] : Elsevier Science, 1990, volume 237
DOI / URN: 10.1016/j.eswa.2023.121305
Catalog ID: ELV065657284
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV065657284 | ||
003 | DE-627 | ||
005 | 20240104094241.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231118s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.eswa.2023.121305 |2 doi | |
035 | |a (DE-627)ELV065657284 | ||
035 | |a (ELSEVIER)S0957-4174(23)01807-9 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 004 |q VZ |
084 | |a 54.72 |2 bkl | ||
100 | 1 | |a Wu, Peishu |e verfasserin |0 (orcid)0000-0001-9891-3809 |4 aut | |
245 | 1 | 0 | |a KD-PAR: A knowledge distillation-based pedestrian attribute recognition model with multi-label mixed feature learning network |
264 | 1 | |c 2023 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a In this paper, a novel knowledge distillation (KD)-based pedestrian attribute recognition (PAR) model is developed, in which a multi-label mixed feature learning network (MMFL-Net) is designed and adopted as the student model. In particular, by applying grouped depth-wise separable convolution, structural re-parameterization, and a coordinate attention mechanism, multi-scale receptive-field information is sufficiently fused and spatially dependent robust features are extracted, while the model complexity is kept acceptable. To alleviate the imbalance among category samples, an attribute weight parameter is introduced into the multi-label loss. Moreover, a Jensen–Shannon (JS) divergence-based KD scheme facilitates the learning of MMFL-Net from the teacher model, which strengthens the fitting of deep feature correlations and thus yields a highly generalized model. The proposed KD-PAR is comprehensively evaluated through extensive experiments, and the results show its effectiveness and superiority over other advanced multi-label learning (MLL) methods and state-of-the-art PAR models, achieving a favorable balance between accuracy and complexity. In complex scenes such as blurry backgrounds, similar-object interference, and target occlusion, KD-PAR still presents satisfactory recognition results with strong robustness, thereby providing a feasible and practical solution to PAR tasks. | ||
650 | 4 | |a Pedestrian attribute recognition | |
650 | 4 | |a Mixed feature learning | |
650 | 4 | |a Structural re-parameterization | |
650 | 4 | |a Attribute weight | |
650 | 4 | |a Knowledge distillation | |
700 | 1 | |a Wang, Zidong |e verfasserin |0 (orcid)0000-0002-9576-7401 |4 aut | |
700 | 1 | |a Li, Han |e verfasserin |0 (orcid)0000-0003-0276-9756 |4 aut | |
700 | 1 | |a Zeng, Nianyin |e verfasserin |0 (orcid)0000-0002-6957-2942 |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Expert systems with applications |d Amsterdam [u.a.] : Elsevier Science, 1990 |g 237 |h Online-Ressource |w (DE-627)320577961 |w (DE-600)2017237-0 |w (DE-576)11481807X |7 nnns |
773 | 1 | 8 | |g volume:237 |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
936 | b | k | |a 54.72 |j Künstliche Intelligenz |q VZ |
951 | |a AR | ||
952 | |d 237 |