A data augmentation method for human action recognition using dense joint motion images
With the development of deep learning and neural network techniques, human action recognition has made great progress in recent years. However, it remains challenging to analyse temporal information and identify human actions with few training samples. In this paper, an effective motion image called...
Detailed description
Author(s): Yao, Leiyue [author]; Yang, Wei [author]; Huang, Wei [author]
Format: E-article
Language: English
Published: 2020
Subjects: Human action recognition; Motion image; Action encoding; Few-shot learning; Skeleton-based action recognition
Parent work: Contained in: Applied soft computing - Amsterdam [u.a.] : Elsevier Science, 2001, 97
Parent work: volume:97
DOI / URN: 10.1016/j.asoc.2020.106713
Catalogue ID: ELV005141869
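The DOI above can be turned into machine-readable citation metadata via DOI content negotiation against https://doi.org. A minimal Python sketch follows; it assumes network access and that the registration agency serves CSL JSON for this DOI, which is typical for Crossref-registered Elsevier articles but not guaranteed by this record.

```python
# Fetch CSL JSON metadata for the DOI recorded above via content negotiation.
import json
import urllib.request

DOI = "10.1016/j.asoc.2020.106713"

req = urllib.request.Request(
    f"https://doi.org/{DOI}",
    # Ask the resolver for Citation Style Language JSON instead of
    # redirecting to the publisher landing page.
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
with urllib.request.urlopen(req) as resp:
    meta = json.load(resp)

print(meta.get("title"))
print([a.get("family") for a in meta.get("author", [])])
print(meta.get("container-title"))  # expected: Applied Soft Computing
```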
LEADER 01000caa a22002652 4500
001    ELV005141869
003    DE-627
005    20230524152337.0
007    cr uuu---uuuuu
008    230503s2020 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.asoc.2020.106713 |2 doi
035    |a (DE-627)ELV005141869
035    |a (ELSEVIER)S1568-4946(20)30651-7
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 04 |a 004 |q DE-600
084    |a 54.00 |2 bkl
100 1  |a Yao, Leiyue |e verfasserin |0 (orcid)0000-0003-0726-3711 |4 aut
245 10 |a A data augmentation method for human action recognition using dense joint motion images
264  1 |c 2020
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a With the development of deep learning and neural network techniques, human action recognition has made great progress in recent years. However, it remains challenging to analyse temporal information and identify human actions with few training samples. In this paper, an effective motion image called a dense joint motion image (DJMI) was proposed to transform an action to an image. Our method was compared with state-of-the-art methods, and its contributions are mainly reflected in three characteristics. First, in contrast to the current classic joint trajectory map (JTM), every pixel of the DJMI is useful and contains essential spatio-temporal information. Thus, the input parameters of the deep neural network (DNN) are reduced by an order of magnitude, and the efficiency of action recognition is improved. Second, each frame of an action video is encoded as an independent slice of the DJMI, which avoids the information loss caused by action trajectory overlap. Third, by using DJMIs, proven algorithms for graphics and images can be used to generate training samples. Compared with the original image, the generated DJMIs contain new and different spatio-temporal information, which enables DNNs to be trained well on very few samples. Our method was evaluated on three benchmark datasets, namely, Florence-3D, UTKinect-Action3D and MSR Action3D. The results showed that our method achieved a recognition speed of 37 fps with competitive accuracy on these datasets. The time efficiency and few-shot learning capability of our method enable it to be used in real-time surveillance.
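The abstract describes encoding each video frame as an independent slice of a motion image and then applying standard image transforms for augmentation, but the record does not spell out the exact DJMI layout. The Python sketch below is therefore hypothetical: it assumes one image row per frame with joint coordinates mapped to RGB channels, and uses a skeleton rotation as the augmentation; all function names are ours, not the paper's.

```python
# Hypothetical DJMI-style encoding: one row per frame, so no two frames
# overwrite each other (unlike trajectory-overlap in JTM-style maps).
import numpy as np

def skeleton_to_motion_image(frames: np.ndarray) -> np.ndarray:
    """Encode a skeleton sequence of shape (T frames, J joints, 3 coords)
    as a (T, J, 3) uint8 image; each frame becomes an independent slice."""
    lo = frames.min(axis=(0, 1), keepdims=True)
    hi = frames.max(axis=(0, 1), keepdims=True)
    norm = (frames - lo) / np.maximum(hi - lo, 1e-8)  # per-axis min-max
    return (norm * 255).astype(np.uint8)

def rotate_skeleton(frames: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate the skeleton about the vertical (y) axis before encoding,
    yielding genuinely new spatio-temporal views of the same action."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return frames @ rot.T

# Toy usage: 40 frames of a 20-joint skeleton, augmented three ways.
seq = np.random.rand(40, 20, 3)
images = [skeleton_to_motion_image(rotate_skeleton(seq, a))
          for a in (-30.0, 0.0, 30.0)]
print([im.shape for im in images])  # [(40, 20, 3), (40, 20, 3), (40, 20, 3)]
```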
650  4 |a Human action recognition
650  4 |a Motion image
650  4 |a Action encoding
650  4 |a Few-shot learning
650  4 |a Skeleton-based action recognition
700 1  |a Yang, Wei |e verfasserin |4 aut
700 1  |a Huang, Wei |e verfasserin |4 aut
773 08 |i Enthalten in |t Applied soft computing |d Amsterdam [u.a.] : Elsevier Science, 2001 |g 97 |h Online-Ressource |w (DE-627)334375754 |w (DE-600)2057709-6 |w (DE-576)256145733 |x 1568-4946 |7 nnns
773 18 |g volume:97
912    |a GBV_USEFLAG_U
912    |a SYSFLAG_U
912    |a GBV_ELV
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_101
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_224
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2006
912    |a GBV_ILN_2008
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2038
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2065
912    |a GBV_ILN_2068
912    |a GBV_ILN_2088
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2113
912    |a GBV_ILN_2118
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2147
912    |a GBV_ILN_2148
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_2522
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4046
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4242
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4335
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
936 bk |a 54.00 |j Informatik: Allgemeines
951    |a AR
952    |d 97
author_variant |
l y ly w y wy w h wh |
matchkey_str |
article:15684946:2020----::dtagettomtofruaatorcgiinsnd |
hierarchy_sort_str |
2020 |
bklnumber |
54.00 |
publishDate |
2020 |
allfields |
10.1016/j.asoc.2020.106713 doi (DE-627)ELV005141869 (ELSEVIER)S1568-4946(20)30651-7 DE-627 ger DE-627 rda eng 004 DE-600 54.00 bkl Yao, Leiyue verfasserin (orcid)0000-0003-0726-3711 aut A data augmentation method for human action recognition using dense joint motion images 2020 nicht spezifiziert zzz rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier With the development of deep learning and neural network techniques, human action recognition has made great progress in recent years. However, it remains challenging to analyse temporal information and identify human actions with few training samples. In this paper, an effective motion image called a dense joint motion image (DJMI) was proposed to transform an action to an image. Our method was compared with state-of-the-art methods, and its contributions are mainly reflected in three characteristics. First, in contrast to the current classic joint trajectory map (JTM), every pixel of the DJMI is useful and contains essential spatio-temporal information. Thus, the input parameters of the deep neural network (DNN) are reduced by an order of magnitude, and the efficiency of action recognition is improved. Second, each frame of an action video is encoded as an independent slice of the DJMI, which avoids the information loss caused by action trajectory overlap. Third, by using DJMIs, proven algorithms for graphics and images can be used to generate training samples. Compared with the original image, the generated DJMIs contain new and different spatio-temporal information, which enables DNNs to be trained well on very few samples. Our method was evaluated on three benchmark datasets, namely, Florence-3D, UTKinect-Action3D and MSR Action3D. The results showed that our method achieved a recognition speed of 37 fps with competitive accuracy on these datasets. The time efficiency and few-shot learning capability of our method enable it to be used in real-time surveillance. Human action recognition Motion image Action encoding Few-shot learning Skeleton-based action recognition Yang, Wei verfasserin aut Huang, Wei verfasserin aut Enthalten in Applied soft computing Amsterdam [u.a.] : Elsevier Science, 2001 97 Online-Ressource (DE-627)334375754 (DE-600)2057709-6 (DE-576)256145733 1568-4946 nnns volume:97 GBV_USEFLAG_U SYSFLAG_U GBV_ELV GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_224 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2008 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4393 54.00 Informatik: Allgemeines AR 97 |
language |
English |
source |
Enthalten in Applied soft computing 97 volume:97 |
format_phy_str_mv |
Article |
bklname |
Informatik: Allgemeines |
institution |
findex.gbv.de |
topic_facet |
Human action recognition Motion image Action encoding Few-shot learning Skeleton-based action recognition |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Applied soft computing |
authorswithroles_txt_mv |
Yao, Leiyue @@aut@@ Yang, Wei @@aut@@ Huang, Wei @@aut@@ |
publishDateDaySort_date |
2020-01-01T00:00:00Z |
hierarchy_top_id |
334375754 |
dewey-sort |
14 |
id |
ELV005141869 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV005141869</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230524152337.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230503s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.asoc.2020.106713</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV005141869</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S1568-4946(20)30651-7</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.00</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Yao, Leiyue</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0003-0726-3711</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">A data augmentation method for human action recognition using dense joint motion images</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">With the development of deep learning and neural network techniques, human action recognition has made great progress in recent years. However, it remains challenging to analyse temporal information and identify human actions with few training samples. In this paper, an effective motion image called a dense joint motion image (DJMI) was proposed to transform an action to an image. Our method was compared with state-of-the-art methods, and its contributions are mainly reflected in three characteristics. First, in contrast to the current classic joint trajectory map (JTM), every pixel of the DJMI is useful and contains essential spatio-temporal information. Thus, the input parameters of the deep neural network (DNN) are reduced by an order of magnitude, and the efficiency of action recognition is improved. Second, each frame of an action video is encoded as an independent slice of the DJMI, which avoids the information loss caused by action trajectory overlap. Third, by using DJMIs, proven algorithms for graphics and images can be used to generate training samples. 
Compared with the original image, the generated DJMIs contain new and different spatio-temporal information, which enables DNNs to be trained well on very few samples. Our method was evaluated on three benchmark datasets, namely, Florence-3D, UTKinect-Action3D and MSR Action3D. The results showed that our method achieved a recognition speed of 37 fps with competitive accuracy on these datasets. The time efficiency and few-shot learning capability of our method enable it to be used in real-time surveillance.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Human action recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Motion image</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Action encoding</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Few-shot learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Skeleton-based action recognition</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yang, Wei</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Huang, Wei</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Applied soft computing</subfield><subfield code="d">Amsterdam [u.a.] : Elsevier Science, 2001</subfield><subfield code="g">97</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)334375754</subfield><subfield code="w">(DE-600)2057709-6</subfield><subfield code="w">(DE-576)256145733</subfield><subfield code="x">1568-4946</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:97</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.00</subfield><subfield code="j">Informatik: Allgemeines</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " 
ind2=" "><subfield code="d">97</subfield></datafield></record></collection>
|
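The fullrecord field above is a MARCXML dump (namespace http://www.loc.gov/MARC21/slim). A small sketch of reading it with the standard library follows; pymarc would also work, but ElementTree keeps it dependency-free. The filename `fullrecord.xml` is ours, standing in for the blob saved to disk.

```python
# Extract title, authors, and ISSN from the MARCXML record above.
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Yield subfield values for a given datafield tag and subfield code."""
    for df in record.findall(f"marc:datafield[@tag='{tag}']", NS):
        for sf in df.findall(f"marc:subfield[@code='{code}']", NS):
            yield sf.text

tree = ET.parse("fullrecord.xml")  # the <collection> dump saved to a file
record = tree.getroot().find("marc:record", NS)

title = next(subfields(record, "245", "a"))
authors = list(subfields(record, "100", "a")) + list(subfields(record, "700", "a"))
issn = next(subfields(record, "773", "x"))
print(title)    # A data augmentation method for human action recognition ...
print(authors)  # ['Yao, Leiyue', 'Yang, Wei', 'Huang, Wei']
print(issn)     # 1568-4946
```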
author |
Yao, Leiyue |
spellingShingle |
Yao, Leiyue ddc 004 bkl 54.00 misc Human action recognition misc Motion image misc Action encoding misc Few-shot learning misc Skeleton-based action recognition A data augmentation method for human action recognition using dense joint motion images |
authorStr |
Yao, Leiyue |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)334375754 |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1568-4946 |
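An ISSN's final character is a check digit (ISO 3297: weights 8 down to 2 over the first seven digits, modulo 11, with remainder 10 written as 'X'). A quick verification sketch; the function name is ours:

```python
# Recompute the ISSN check digit and compare it with the recorded value.
def issn_check_digit(issn: str) -> str:
    digits = issn.replace("-", "")[:7]
    total = sum(int(d) * w for d, w in zip(digits, range(8, 1, -1)))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

assert issn_check_digit("1568-4946") == "6"  # matches the ISSN in this record
```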
topic_title |
004 DE-600 54.00 bkl A data augmentation method for human action recognition using dense joint motion images Human action recognition Motion image Action encoding Few-shot learning Skeleton-based action recognition |
topic |
ddc 004 bkl 54.00 misc Human action recognition misc Motion image misc Action encoding misc Few-shot learning misc Skeleton-based action recognition |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Applied soft computing |
hierarchy_parent_id |
334375754 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Applied soft computing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)334375754 (DE-600)2057709-6 (DE-576)256145733 |
title |
A data augmentation method for human action recognition using dense joint motion images |
ctrlnum |
(DE-627)ELV005141869 (ELSEVIER)S1568-4946(20)30651-7 |
title_full |
A data augmentation method for human action recognition using dense joint motion images |
author_sort |
Yao, Leiyue |
journal |
Applied soft computing |
journalStr |
Applied soft computing |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
zzz |
author_browse |
Yao, Leiyue Yang, Wei Huang, Wei |
container_volume |
97 |
class |
004 DE-600 54.00 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Yao, Leiyue |
doi_str_mv |
10.1016/j.asoc.2020.106713 |
normlink |
(ORCID)0000-0003-0726-3711 |
normlink_prefix_str_mv |
(orcid)0000-0003-0726-3711 |
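The normlink is the first author's ORCID iD. Its last character is a check character computed with ISO 7064 MOD 11-2, as specified by orcid.org; a short verification sketch (function name is ours):

```python
# Recompute the ORCID check character: fold each base digit into a running
# total, then derive the check; a result of 10 is written as 'X'.
def orcid_check_char(orcid: str) -> str:
    digits = orcid.replace("-", "")
    total = 0
    for d in digits[:-1]:          # all digits except the check character
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

assert orcid_check_char("0000-0003-0726-3711") == "1"  # matches this record
```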
dewey-full |
004 |
author2-role |
verfasserin |
title_sort |
a data augmentation method for human action recognition using dense joint motion images |
title_auth |
A data augmentation method for human action recognition using dense joint motion images |
abstract |
With the development of deep learning and neural network techniques, human action recognition has made great progress in recent years. However, it remains challenging to analyse temporal information and identify human actions with few training samples. In this paper, an effective motion image called a dense joint motion image (DJMI) was proposed to transform an action to an image. Our method was compared with state-of-the-art methods, and its contributions are mainly reflected in three characteristics. First, in contrast to the current classic joint trajectory map (JTM), every pixel of the DJMI is useful and contains essential spatio-temporal information. Thus, the input parameters of the deep neural network (DNN) are reduced by an order of magnitude, and the efficiency of action recognition is improved. Second, each frame of an action video is encoded as an independent slice of the DJMI, which avoids the information loss caused by action trajectory overlap. Third, by using DJMIs, proven algorithms for graphics and images can be used to generate training samples. Compared with the original image, the generated DJMIs contain new and different spatio-temporal information, which enables DNNs to be trained well on very few samples. Our method was evaluated on three benchmark datasets, namely, Florence-3D, UTKinect-Action3D and MSR Action3D. The results showed that our method achieved a recognition speed of 37 fps with competitive accuracy on these datasets. The time efficiency and few-shot learning capability of our method enable it to be used in real-time surveillance. |
collection_details |
GBV_USEFLAG_U SYSFLAG_U GBV_ELV GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_224 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2008 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4393 |
title_short |
A data augmentation method for human action recognition using dense joint motion images |
remote_bool |
true |
author2 |
Yang, Wei Huang, Wei |
author2Str |
Yang, Wei Huang, Wei |
ppnlink |
334375754 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.asoc.2020.106713 |
up_date |
2024-07-06T16:57:21.746Z |
_version_ |
1803849607939096576 |