Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot
With the arrival of a global aging society, elderly‐care robots are becoming more and more attractive and can provide better caring services through action recognition. This article presents a skeleton‐guided action recognition framework with multistream 3D convolutional neural network. Two parallel dual‐stream lightweight networks are proposed to enhance the feature extraction ability of human action and meanwhile reduce computation. Two different modes of skeleton input video are constructed to improve the recognition accuracy by decision fusion. The backbone networks adopt Resnet‐18, the feature fusion layer and sliding window mechanism are both designed, and two cross‐entropy losses are used to supervise their training. A dataset (named elder care action recognition (EC‐AR)) with different categories of action is built. The experimental results on HMDB‐51 and EC‐AR datasets both demonstrate that the proposed framework outperforms the existing methods. The developed method is also applied to a prototype of elderly‐care robots, and the test results in home scenarios show that it still has high recognition accuracy and good real‐time performance.
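The inference pipeline the abstract outlines (clip-wise recognition over a sliding window, with per-stream class scores combined by decision fusion) can be sketched in a few lines. This is a minimal illustration, not the article's implementation: the window size, stride, and equal fusion weight below are assumptions for the example, and the two logit vectors stand in for the outputs of the two dual-stream networks.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sliding_windows(frames, size=16, stride=8):
    """Yield fixed-length clips from a frame sequence (size/stride are illustrative)."""
    for start in range(0, max(len(frames) - size + 1, 1), stride):
        yield frames[start:start + size]

def fuse_decisions(logits_a, logits_b, w=0.5):
    """Late (decision-level) fusion: weighted average of per-stream softmax scores."""
    return w * softmax(logits_a) + (1 - w) * softmax(logits_b)

# Toy example with 3 action classes; each vector mimics one stream's logits for a clip.
stream_a = np.array([2.0, 1.0, 0.1])
stream_b = np.array([1.8, 0.2, 1.0])
scores = fuse_decisions(stream_a, stream_b)
pred = int(np.argmax(scores))  # fused class decision for this clip
```

In practice each window would be pushed through both networks and the fused scores accumulated across windows, which is what makes the sliding-window mechanism suit continuous home-scenario video rather than pre-trimmed clips.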
Detailed description

Author(s): Dawei Zhang [author]; Yanming Zhang [author]; Meng Zhou [author]
Format: E-article
Language: English
Published: 2023
Keywords: action recognition; deep learning; service robots
Parent work: In: Advanced Intelligent Systems - Wiley, 2019, 5(2023), 12, pages n/a-n/a
Parent work: volume:5 ; year:2023 ; number:12 ; pages:n/a-n/a
DOI / URN: 10.1002/aisy.202300326
Catalog ID: DOAJ09876828X
LEADER 01000naa a22002652 4500
001 DOAJ09876828X
003 DE-627
005 20240414001916.0
007 cr uuu---uuuuu
008 240414s2023 xx |||||o 00| ||eng c
024 7 |a 10.1002/aisy.202300326 |2 doi
035 |a (DE-627)DOAJ09876828X
035 |a (DE-599)DOAJ5c77601117ec4e5695d835890056e539
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a TK7885-7895
050 0 |a TJ212-225
100 0 |a Dawei Zhang |e verfasserin |4 aut
245 1 0 |a Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot
264 1 |c 2023
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a With the arrival of a global aging society, elderly‐care robots are becoming more and more attractive and can provide better caring services through action recognition. This article presents a skeleton‐guided action recognition framework with multistream 3D convolutional neural network. Two parallel dual‐stream lightweight networks are proposed to enhance the feature extraction ability of human action and meanwhile reduce computation. Two different modes of skeleton input video are constructed to improve the recognition accuracy by decision fusion. The backbone networks adopt Resnet‐18, the feature fusion layer and sliding window mechanism are both designed, and two cross‐entropy losses are used to supervise their training. A dataset (named elder care action recognition (EC‐AR)) with different categories of action is built. The experimental results on HMDB‐51 and EC‐AR datasets both demonstrate that the proposed framework outperforms the existing methods. The developed method is also applied to a prototype of elderly‐care robots, and the test results in home scenarios show that it still has high recognition accuracy and good real‐time performance.
650 4 |a action recognition
650 4 |a deep learning
650 4 |a service robots
653 0 |a Computer engineering. Computer hardware
653 0 |a Control engineering systems. Automatic machinery (General)
700 0 |a Yanming Zhang |e verfasserin |4 aut
700 0 |a Meng Zhou |e verfasserin |4 aut
773 0 8 |i In |t Advanced Intelligent Systems |d Wiley, 2019 |g 5(2023), 12, Seite n/a-n/a |w (DE-627)166775601X |w (DE-600)2975566-9 |x 26404567 |7 nnns
773 1 8 |g volume:5 |g year:2023 |g number:12 |g pages:n/a-n/a
856 4 0 |u https://doi.org/10.1002/aisy.202300326 |z kostenfrei
856 4 0 |u https://doaj.org/article/5c77601117ec4e5695d835890056e539 |z kostenfrei
856 4 0 |u https://doi.org/10.1002/aisy.202300326 |z kostenfrei
856 4 2 |u https://doaj.org/toc/2640-4567 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_171
912 |a GBV_ILN_213
912 |a GBV_ILN_224
912 |a GBV_ILN_230
912 |a GBV_ILN_267
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_636
912 |a GBV_ILN_2004
912 |a GBV_ILN_2005
912 |a GBV_ILN_2006
912 |a GBV_ILN_2007
912 |a GBV_ILN_2010
912 |a GBV_ILN_2011
912 |a GBV_ILN_2014
912 |a GBV_ILN_2026
912 |a GBV_ILN_2027
912 |a GBV_ILN_2034
912 |a GBV_ILN_2037
912 |a GBV_ILN_2038
912 |a GBV_ILN_2044
912 |a GBV_ILN_2048
912 |a GBV_ILN_2049
912 |a GBV_ILN_2050
912 |a GBV_ILN_2055
912 |a GBV_ILN_2056
912 |a GBV_ILN_2057
912 |a GBV_ILN_2059
912 |a GBV_ILN_2061
912 |a GBV_ILN_2064
912 |a GBV_ILN_2068
912 |a GBV_ILN_2088
912 |a GBV_ILN_2106
912 |a GBV_ILN_2108
912 |a GBV_ILN_2110
912 |a GBV_ILN_2111
912 |a GBV_ILN_2118
912 |a GBV_ILN_2122
912 |a GBV_ILN_2143
912 |a GBV_ILN_2144
912 |a GBV_ILN_2147
912 |a GBV_ILN_2148
912 |a GBV_ILN_2152
912 |a GBV_ILN_2153
912 |a GBV_ILN_2232
912 |a GBV_ILN_2336
912 |a GBV_ILN_2470
912 |a GBV_ILN_2507
912 |a GBV_ILN_2522
912 |a GBV_ILN_4012
912 |a GBV_ILN_4035
912 |a GBV_ILN_4037
912 |a GBV_ILN_4046
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4242
912 |a GBV_ILN_4249
912 |a GBV_ILN_4251
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4326
912 |a GBV_ILN_4333
912 |a GBV_ILN_4334
912 |a GBV_ILN_4335
912 |a GBV_ILN_4336
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 5 |j 2023 |e 12 |h n/a-n/a
author_variant |
d z dz y z yz m z mz |
matchkey_str |
article:26404567:2023----::kltnuddcineontowtmlitemdovltoanuan |
hierarchy_sort_str |
2023 |
callnumber-subject-code |
TK |
publishDate |
2023 |
allfields |
10.1002/aisy.202300326 doi (DE-627)DOAJ09876828X (DE-599)DOAJ5c77601117ec4e5695d835890056e539 DE-627 ger DE-627 rakwb eng TK7885-7895 TJ212-225 Dawei Zhang verfasserin aut Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot 2023 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier With the arrival of a global aging society, elderly‐care robots are becoming more and more attractive and can provide better caring services through action recognition. This article presents a skeleton‐guided action recognition framework with multistream 3D convolutional neural network. Two parallel dual‐stream lightweight networks are proposed to enhance the feature extraction ability of human action and meanwhile reduce computation. Two different modes of skeleton input video are constructed to improve the recognition accuracy by decision fusion. The backbone networks adopt Resnet‐18, the feature fusion layer and sliding window mechanism are both designed, and two cross‐entropy losses are used to supervise their training. A dataset (named elder care action recognition (EC‐AR)) with different categories of action is built. The experimental results on HMDB‐51 and EC‐AR datasets both demonstrate that the proposed framework outperforms the existing methods. The developed method is also applied to a prototype of elderly‐care robots, and the test results in home scenarios show that it still has high recognition accuracy and good real‐time performance. action recognition deep learning service robots Computer engineering. Computer hardware Control engineering systems. 
Automatic machinery (General) Yanming Zhang verfasserin aut Meng Zhou verfasserin aut In Advanced Intelligent Systems Wiley, 2019 5(2023), 12, Seite n/a-n/a (DE-627)166775601X (DE-600)2975566-9 26404567 nnns volume:5 year:2023 number:12 pages:n/a-n/a https://doi.org/10.1002/aisy.202300326 kostenfrei https://doaj.org/article/5c77601117ec4e5695d835890056e539 kostenfrei https://doi.org/10.1002/aisy.202300326 kostenfrei https://doaj.org/toc/2640-4567 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_267 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 5 2023 12 n/a-n/a |
language |
English |
source |
In Advanced Intelligent Systems 5(2023), 12, Seite n/a-n/a volume:5 year:2023 number:12 pages:n/a-n/a |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
action recognition deep learning service robots Computer engineering. Computer hardware Control engineering systems. Automatic machinery (General) |
isfreeaccess_bool |
true |
container_title |
Advanced Intelligent Systems |
authorswithroles_txt_mv |
Dawei Zhang @@aut@@ Yanming Zhang @@aut@@ Meng Zhou @@aut@@ |
publishDateDaySort_date |
2023-01-01T00:00:00Z |
hierarchy_top_id |
166775601X |
id |
DOAJ09876828X |
language_de |
englisch |
code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">5</subfield><subfield code="j">2023</subfield><subfield code="e">12</subfield><subfield code="h">n/a-n/a</subfield></datafield></record></collection>
|
callnumber-first |
T - Technology |
author |
Dawei Zhang |
spellingShingle |
Dawei Zhang misc TK7885-7895 misc TJ212-225 misc action recognition misc deep learning misc service robots misc Computer engineering. Computer hardware misc Control engineering systems. Automatic machinery (General) Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot |
authorStr |
Dawei Zhang |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)166775601X |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK7885-7895 |
illustrated |
Not Illustrated |
issn |
26404567 |
topic_title |
TK7885-7895 TJ212-225 Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot action recognition deep learning service robots |
topic |
misc TK7885-7895 misc TJ212-225 misc action recognition misc deep learning misc service robots misc Computer engineering. Computer hardware misc Control engineering systems. Automatic machinery (General) |
topic_unstemmed |
misc TK7885-7895 misc TJ212-225 misc action recognition misc deep learning misc service robots misc Computer engineering. Computer hardware misc Control engineering systems. Automatic machinery (General) |
topic_browse |
misc TK7885-7895 misc TJ212-225 misc action recognition misc deep learning misc service robots misc Computer engineering. Computer hardware misc Control engineering systems. Automatic machinery (General) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Advanced Intelligent Systems |
hierarchy_parent_id |
166775601X |
hierarchy_top_title |
Advanced Intelligent Systems |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)166775601X (DE-600)2975566-9 |
title |
Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot |
ctrlnum |
(DE-627)DOAJ09876828X (DE-599)DOAJ5c77601117ec4e5695d835890056e539 |
title_full |
Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot |
author_sort |
Dawei Zhang |
journal |
Advanced Intelligent Systems |
journalStr |
Advanced Intelligent Systems |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
txt |
author_browse |
Dawei Zhang Yanming Zhang Meng Zhou |
container_volume |
5 |
class |
TK7885-7895 TJ212-225 |
format_se |
Elektronische Aufsätze |
author-letter |
Dawei Zhang |
doi_str_mv |
10.1002/aisy.202300326 |
author2-role |
verfasserin |
title_sort |
skeleton‐guided action recognition with multistream 3d convolutional neural network for elderly‐care robot |
callnumber |
TK7885-7895 |
title_auth |
Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot |
abstract |
With the arrival of a global aging society, elderly‐care robots are becoming more and more attractive and can provide better caring services through action recognition. This article presents a skeleton‐guided action recognition framework with multistream 3D convolutional neural network. Two parallel dual‐stream lightweight networks are proposed to enhance the feature extraction ability of human action and meanwhile reduce computation. Two different modes of skeleton input video are constructed to improve the recognition accuracy by decision fusion. The backbone networks adopt Resnet‐18, the feature fusion layer and sliding window mechanism are both designed, and two cross‐entropy losses are used to supervise their training. A dataset (named elder care action recognition (EC‐AR)) with different categories of action is built. The experimental results on HMDB‐51 and EC‐AR datasets both demonstrate that the proposed framework outperforms the existing methods. The developed method is also applied to a prototype of elderly‐care robots, and the test results in home scenarios show that it still has high recognition accuracy and good real‐time performance. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_267 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
container_issue |
12 |
title_short |
Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot |
url |
https://doi.org/10.1002/aisy.202300326 https://doaj.org/article/5c77601117ec4e5695d835890056e539 https://doaj.org/toc/2640-4567 |
remote_bool |
true |
author2 |
Yanming Zhang Meng Zhou |
author2Str |
Yanming Zhang Meng Zhou |
ppnlink |
166775601X |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1002/aisy.202300326 |
callnumber-a |
TK7885-7895 |
up_date |
2024-07-03T19:06:06.661Z |
_version_ |
1803585917199319040 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">DOAJ09876828X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240414001916.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240414s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1002/aisy.202300326</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ09876828X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ5c77601117ec4e5695d835890056e539</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK7885-7895</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TJ212-225</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Dawei Zhang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Skeleton‐Guided Action Recognition with Multistream 3D Convolutional Neural Network for Elderly‐Care Robot</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield 
code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">With the arrival of a global aging society, elderly‐care robots are becoming more and more attractive and can provide better caring services through action recognition. This article presents a skeleton‐guided action recognition framework with multistream 3D convolutional neural network. Two parallel dual‐stream lightweight networks are proposed to enhance the feature extraction ability of human action and meanwhile reduce computation. Two different modes of skeleton input video are constructed to improve the recognition accuracy by decision fusion. The backbone networks adopt Resnet‐18, the feature fusion layer and sliding window mechanism are both designed, and two cross‐entropy losses are used to supervise their training. A dataset (named elder care action recognition (EC‐AR)) with different categories of action is built. The experimental results on HMDB‐51 and EC‐AR datasets both demonstrate that the proposed framework outperforms the existing methods. The developed method is also applied to a prototype of elderly‐care robots, and the test results in home scenarios show that it still has high recognition accuracy and good real‐time performance.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">action recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">service robots</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computer engineering. Computer hardware</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Control engineering systems. 
Automatic machinery (General)</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yanming Zhang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Meng Zhou</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Advanced Intelligent Systems</subfield><subfield code="d">Wiley, 2019</subfield><subfield code="g">5(2023), 12, Seite n/a-n/a</subfield><subfield code="w">(DE-627)166775601X</subfield><subfield code="w">(DE-600)2975566-9</subfield><subfield code="x">26404567</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:5</subfield><subfield code="g">year:2023</subfield><subfield code="g">number:12</subfield><subfield code="g">pages:n/a-n/a</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1002/aisy.202300326</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/5c77601117ec4e5695d835890056e539</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1002/aisy.202300326</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2640-4567</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" 
" ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_267</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">5</subfield><subfield code="j">2023</subfield><subfield code="e">12</subfield><subfield code="h">n/a-n/a</subfield></datafield></record></collection>
|