Convolutional non‐local spatial‐temporal learning for multi‐modality action recognition
Abstract: Traditional deep convolutional networks have shown that RGB and depth are complementary modalities for video action recognition. However, it is difficult to improve recognition accuracy because a single convolutional network is limited in its ability to extract the underlying relationships and complementary features between the two modalities. The authors propose a novel two‐stream convolutional network for multi‐modality action recognition that uses joint optimisation learning to extract global features from RGB and depth sequences. Specifically, a non‐local multi‐modality compensation block is introduced to learn semantically fused features and improve recognition performance. Experimental results on two multi‐modality human action datasets, NTU RGB+D 120 and PKU‐MMD, verify the effectiveness of the proposed recognition framework and demonstrate that the non‐local multi‐modality compensation block can learn complementary features and enhance recognition accuracy.
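The abstract describes a two‐stream architecture whose RGB and depth streams are fused by a non‐local multi‐modality compensation block. The authors' released implementation is not reproduced here; the following is a minimal sketch of how a cross‐modality non‐local (dot‐product attention) block of this kind is commonly built, assuming 3D‐CNN feature maps of shape (batch, channels, time, height, width). The class name `NonLocalCompensation` and all parameter choices are illustrative, not the paper's API.

```python
# Minimal sketch (assumed design, not the authors' code): queries come from
# one modality, keys/values from the other, so each stream is "compensated"
# with global spatio-temporal context from its complement.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalCompensation(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv3d(channels, reduced, kernel_size=1)  # queries (RGB)
        self.phi = nn.Conv3d(channels, reduced, kernel_size=1)    # keys (depth)
        self.g = nn.Conv3d(channels, reduced, kernel_size=1)      # values (depth)
        self.out = nn.Conv3d(reduced, channels, kernel_size=1)    # restore width

    def forward(self, rgb, depth):
        b, c, t, h, w = rgb.shape
        q = self.theta(rgb).flatten(2).transpose(1, 2)   # (b, thw, c')
        k = self.phi(depth).flatten(2)                   # (b, c', thw)
        v = self.g(depth).flatten(2).transpose(1, 2)     # (b, thw, c')
        attn = F.softmax(q @ k, dim=-1)                  # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, t, h, w)
        return rgb + self.out(y)                         # residual fusion

# Toy usage: fuse 64-channel spatio-temporal features from both streams.
block = NonLocalCompensation(64)
fused = block(torch.randn(2, 64, 8, 14, 14), torch.randn(2, 64, 8, 14, 14))
```

A symmetric block with the modality roles swapped would compensate the depth stream in the same way; the fused features of both streams would then feed the classification heads that are jointly optimised.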
Detailed description

Author(s): Ziliang Ren [author]; Huaqiang Yuan [author]; Wenhong Wei [author]; Tiezhu Zhao [author]; Qieshi Zhang [author]
Format: E-Article
Language: English
Published: 2022
Subjects: Image recognition; Optimisation techniques; Computer vision and image processing techniques; Video signal processing; Neural nets
Contained in: Electronics Letters - Wiley, 2021, 58(2022), 20, pages 765-767
Contained in: volume:58 ; year:2022 ; number:20 ; pages:765-767
Links: https://doi.org/10.1049/ell2.12597 (open access); https://doaj.org/article/caed5d9a90be4e95959cda144a4457c7 (open access)
DOI / URN: 10.1049/ell2.12597
Catalog ID: DOAJ022719237
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | DOAJ022719237 | ||
003 | DE-627 | ||
005 | 20230502062123.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230226s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1049/ell2.12597 |2 doi | |
035 | |a (DE-627)DOAJ022719237 | ||
035 | |a (DE-599)DOAJcaed5d9a90be4e95959cda144a4457c7 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
050 | 0 | |a TK1-9971 | |
100 | 0 | |a Ziliang Ren |e verfasserin |4 aut | |
245 | 1 | 0 | |a Convolutional non‐local spatial‐temporal learning for multi‐modality action recognition |
264 | 1 | |c 2022 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Abstract Traditional deep convolutional networks have shown that RGB and depth are complementary modalities for video action recognition. However, it is difficult to improve recognition accuracy because a single convolutional network is limited in its ability to extract the underlying relationships and complementary features between the two modalities. The authors propose a novel two‐stream convolutional network for multi‐modality action recognition that uses joint optimisation learning to extract global features from RGB and depth sequences. Specifically, a non‐local multi‐modality compensation block is introduced to learn semantically fused features and improve recognition performance. Experimental results on two multi‐modality human action datasets, NTU RGB+D 120 and PKU‐MMD, verify the effectiveness of the proposed recognition framework and demonstrate that the non‐local multi‐modality compensation block can learn complementary features and enhance recognition accuracy. | ||
650 | 4 | |a Image recognition | |
650 | 4 | |a Optimisation techniques | |
650 | 4 | |a Computer vision and image processing techniques | |
650 | 4 | |a Video signal processing | |
650 | 4 | |a Neural nets | |
653 | 0 | |a Electrical engineering. Electronics. Nuclear engineering | |
700 | 0 | |a Huaqiang Yuan |e verfasserin |4 aut | |
700 | 0 | |a Wenhong Wei |e verfasserin |4 aut | |
700 | 0 | |a Tiezhu Zhao |e verfasserin |4 aut | |
700 | 0 | |a Qieshi Zhang |e verfasserin |4 aut | |
773 | 0 | 8 | |i In |t Electronics Letters |d Wiley, 2021 |g 58(2022), 20, Seite 765-767 |w (DE-627)325616094 |w (DE-600)2038620-5 |x 1350911X |7 nnns |
773 | 1 | 8 | |g volume:58 |g year:2022 |g number:20 |g pages:765-767 |
856 | 4 | 0 | |u https://doi.org/10.1049/ell2.12597 |z kostenfrei |
856 | 4 | 0 | |u https://doaj.org/article/caed5d9a90be4e95959cda144a4457c7 |z kostenfrei |
856 | 4 | 0 | |u https://doi.org/10.1049/ell2.12597 |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/0013-5194 |y Journal toc |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/1350-911X |y Journal toc |z kostenfrei |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_DOAJ | ||
912 | |a SSG-OLC-PHA | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4012 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4367 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 58 |j 2022 |e 20 |h 765-767 |
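The record above is MARC21, and the catalog also serves it as MARCXML (namespace http://www.loc.gov/MARC21/slim). The following is a minimal, standard-library sketch for pulling the core fields out of such a record; the local filename is hypothetical.

```python
# Minimal sketch: parse this record from a MARCXML file and extract the
# title (245 $a), authors (100/700 $a), and DOI (024 $a).
import xml.etree.ElementTree as ET

NS = {"m": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Collect every $code value of every datafield with the given tag."""
    return [
        sf.text
        for df in record.findall(f"m:datafield[@tag='{tag}']", NS)
        for sf in df.findall(f"m:subfield[@code='{code}']", NS)
    ]

tree = ET.parse("DOAJ022719237.xml")              # hypothetical local copy
rec = tree.find(".//m:record", NS)

print("Title:  ", subfields(rec, "245", "a")[0])
print("Authors:", subfields(rec, "100", "a") + subfields(rec, "700", "a"))
print("DOI:    ", subfields(rec, "024", "a")[0])
```

Repeatable fields (here the 700 added authors and the 856 links) come back as lists, which is why the helper returns every match rather than the first.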