Action recognition using vague division DMMs
This study presents a novel human action recognition method based on sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences of uniform length. By controlling a vague boundary (VB), they construct a VB-sequence, which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) describing the dynamic feature of that VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action; collectively, they call this the VB division depth model. For classification, they apply robust probabilistic collaborative representation classification. Recognition results on the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods.
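The abstract outlines a concrete pipeline: uniform sub-sequences widened by a vague boundary, per-frame projection onto three orthogonal planes, accumulation of absolute frame-to-frame differences into DMMs, and concatenation into one action descriptor. Below is a minimal NumPy sketch of that pipeline. The projection scheme (binary occupancy maps over quantised depth slices) and the parameters d_bins, d_max, n_sub, and vb are illustrative assumptions, not details taken from the paper.

import numpy as np

def project_views(frame, d_bins=64, d_max=4000.0):
    """Project one depth frame onto three orthogonal Cartesian planes.

    Simplified stand-in for the point-cloud projection used in the DMM
    literature: the front (x-y) view is the depth image itself; the
    side (y-z) and top (x-z) views are binary occupancy maps built by
    quantising depth into d_bins slices. d_max is an assumed sensor
    range in the same units as the depth values.
    """
    H, W = frame.shape
    z = np.clip(frame / d_max * (d_bins - 1), 0, d_bins - 1).astype(int)
    ys, xs = np.nonzero(frame > 0)            # pixels with valid depth
    side = np.zeros((H, d_bins), dtype=np.float32)
    top = np.zeros((d_bins, W), dtype=np.float32)
    side[ys, z[ys, xs]] = 1.0
    top[z[ys, xs], xs] = 1.0
    return frame.astype(np.float32), side, top

def dmm(frames):
    """Accumulate |difference| of consecutive projected maps per view."""
    views = [project_views(f) for f in frames]
    maps = []
    for v in range(3):                        # front, side, top
        stack = np.stack([frame_views[v] for frame_views in views])
        maps.append(np.abs(np.diff(stack, axis=0)).sum(axis=0))
    return maps

def vb_division_descriptor(frames, n_sub=4, vb=2):
    """Concatenate the DMMs of all VB-sequences of one depth video.

    Each of the n_sub uniform-length sub-sequences is widened by vb
    frames on each side (the vague boundary) before its DMMs are
    computed, so neighbouring VB-sequences overlap.
    """
    T, step = len(frames), len(frames) // n_sub
    feats = []
    for i in range(n_sub):
        lo, hi = max(0, i * step - vb), min(T, (i + 1) * step + vb)
        feats.extend(m.ravel() for m in dmm(frames[lo:hi]))
    return np.concatenate(feats)

Note that for fixed frame dimensions and a fixed n_sub the three map sizes do not depend on how many frames a VB-sequence contains, which is what allows the DMMs of all VB-sequences to be concatenated into a single fixed-length descriptor per video.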
Detailed description
Author(s): Ke Jin [author]; Min Jiang [author]; Jun Kong [author]; Hongtao Huo [author]; Xiaofeng Wang [author]
Format: E-Article
Language: English
Published: 2017
Keywords: human action recognition method; robust probabilistic collaborative representation classification
Published in: The Journal of Engineering - Wiley, 2013, (2017)
Published in: year:2017
Links:
DOI / URN: 10.1049/joe.2016.0330
Catalogue ID: DOAJ001269283
LEADER 01000caa a22002652 4500
001    DOAJ001269283
003    DE-627
005    20230309162235.0
007    cr uuu---uuuuu
008    230225s2017 xx |||||o 00| ||eng c
024 7  |a 10.1049/joe.2016.0330 |2 doi
035    |a (DE-627)DOAJ001269283
035    |a (DE-599)DOAJe034f549829c491ab8c858efc34ed963
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a TA1-2040
100 0  |a Ke Jin |e verfasserin |4 aut
245 10 |a Action recognition using vague division DMMs
264  1 |c 2017
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods.
650  4 |a object recognition
650  4 |a video signal processing
650  4 |a image sequences
650  4 |a image motion analysis
650  4 |a image classification
650  4 |a probability
650  4 |a image representation
650  4 |a human action recognition method
650  4 |a vague division DMM
650  4 |a depth map sequences
650  4 |a vague boundary
650  4 |a VB-sequence
650  4 |a adjacent sequences
650  4 |a original subsequence
650  4 |a orthogonal Cartesian planes
650  4 |a absolute value
650  4 |a depth motion map
650  4 |a video sequence
650  4 |a robust probabilistic collaborative representation classification
650  4 |a MSR Action Pairs dataset
650  4 |a MSR Gesture 3D dataset
650  4 |a MSR Action3D dataset
650  4 |a UTD-MHAD dataset
653  0 |a Engineering (General). Civil engineering (General)
700 0  |a Min Jiang |e verfasserin |4 aut
700 0  |a Jun Kong |e verfasserin |4 aut
700 0  |a Hongtao Huo |e verfasserin |4 aut
700 0  |a Xiaofeng Wang |e verfasserin |4 aut
773 08 |i In |t The Journal of Engineering |d Wiley, 2013 |g (2017) |w (DE-627)75682270X |w (DE-600)2727074-9 |x 20513305 |7 nnns
773 18 |g year:2017
856 40 |u https://doi.org/10.1049/joe.2016.0330 |z kostenfrei
856 40 |u https://doaj.org/article/e034f549829c491ab8c858efc34ed963 |z kostenfrei
856 40 |u http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 |z kostenfrei
856 42 |u https://doaj.org/toc/2051-3305 |y Journal toc |z kostenfrei
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_DOAJ
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_95
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_151
912    |a GBV_ILN_161
912    |a GBV_ILN_170
912    |a GBV_ILN_171
912    |a GBV_ILN_213
912    |a GBV_ILN_224
912    |a GBV_ILN_230
912    |a GBV_ILN_285
912    |a GBV_ILN_293
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_636
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2006
912    |a GBV_ILN_2007
912    |a GBV_ILN_2010
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2026
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2037
912    |a GBV_ILN_2038
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2055
912    |a GBV_ILN_2056
912    |a GBV_ILN_2057
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2068
912    |a GBV_ILN_2088
912    |a GBV_ILN_2106
912    |a GBV_ILN_2108
912    |a GBV_ILN_2110
912    |a GBV_ILN_2111
912    |a GBV_ILN_2118
912    |a GBV_ILN_2122
912    |a GBV_ILN_2143
912    |a GBV_ILN_2144
912    |a GBV_ILN_2147
912    |a GBV_ILN_2148
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2232
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_2522
912    |a GBV_ILN_4012
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4046
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4242
912    |a GBV_ILN_4249
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4335
912    |a GBV_ILN_4336
912    |a GBV_ILN_4338
912    |a GBV_ILN_4367
912    |a GBV_ILN_4700
951    |a AR
952    |j 2017
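Field 520 above names robust probabilistic collaborative representation classification as the final stage. The paper's robust probabilistic variant is not reproduced here; as a rough illustration of the underlying idea only, a plain collaborative-representation classifier codes a query descriptor over all training descriptors with a ridge penalty and assigns the class with the smallest per-class reconstruction residual:

import numpy as np

def crc_classify(y, X, labels, lam=1e-3):
    """Plain CRC sketch (not the paper's robust probabilistic variant).

    y      : query descriptor, shape (d,)
    X      : training descriptors as columns, shape (d, n)
    labels : class label per column, shape (n,)
    lam    : ridge regularisation weight (illustrative value)
    """
    labels = np.asarray(labels)
    # Ridge-regularised coding of y over the whole training set.
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    classes = np.unique(labels)
    # Per-class reconstruction residual using that class's coefficients.
    resid = [np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
             for c in classes]
    return classes[int(np.argmin(resid))]

Descriptors such as those produced by vb_division_descriptor above would serve as the columns of X, typically after resizing and normalising the DMMs so that every video yields a vector of the same length.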
author_variant |
k j kj m j mj j k jk h h hh x w xw |
---|---|
matchkey_str |
article:20513305:2017----::cineontouigau |
hierarchy_sort_str |
2017 |
callnumber-subject-code |
TA |
publishDate |
2017 |
allfields |
10.1049/joe.2016.0330 doi (DE-627)DOAJ001269283 (DE-599)DOAJe034f549829c491ab8c858efc34ed963 DE-627 ger DE-627 rakwb eng TA1-2040 Ke Jin verfasserin aut Action recognition using vague division DMMs 2017 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset Engineering (General). 
Civil engineering (General) Min Jiang verfasserin aut Jun Kong verfasserin aut Hongtao Huo verfasserin aut Xiaofeng Wang verfasserin aut In The Journal of Engineering Wiley, 2013 (2017) (DE-627)75682270X (DE-600)2727074-9 20513305 nnns year:2017 https://doi.org/10.1049/joe.2016.0330 kostenfrei https://doaj.org/article/e034f549829c491ab8c858efc34ed963 kostenfrei http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 kostenfrei https://doaj.org/toc/2051-3305 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2017 |
spelling |
10.1049/joe.2016.0330 doi (DE-627)DOAJ001269283 (DE-599)DOAJe034f549829c491ab8c858efc34ed963 DE-627 ger DE-627 rakwb eng TA1-2040 Ke Jin verfasserin aut Action recognition using vague division DMMs 2017 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset Engineering (General). 
Civil engineering (General) Min Jiang verfasserin aut Jun Kong verfasserin aut Hongtao Huo verfasserin aut Xiaofeng Wang verfasserin aut In The Journal of Engineering Wiley, 2013 (2017) (DE-627)75682270X (DE-600)2727074-9 20513305 nnns year:2017 https://doi.org/10.1049/joe.2016.0330 kostenfrei https://doaj.org/article/e034f549829c491ab8c858efc34ed963 kostenfrei http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 kostenfrei https://doaj.org/toc/2051-3305 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2017 |
allfields_unstemmed |
10.1049/joe.2016.0330 doi (DE-627)DOAJ001269283 (DE-599)DOAJe034f549829c491ab8c858efc34ed963 DE-627 ger DE-627 rakwb eng TA1-2040 Ke Jin verfasserin aut Action recognition using vague division DMMs 2017 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset Engineering (General). 
Civil engineering (General) Min Jiang verfasserin aut Jun Kong verfasserin aut Hongtao Huo verfasserin aut Xiaofeng Wang verfasserin aut In The Journal of Engineering Wiley, 2013 (2017) (DE-627)75682270X (DE-600)2727074-9 20513305 nnns year:2017 https://doi.org/10.1049/joe.2016.0330 kostenfrei https://doaj.org/article/e034f549829c491ab8c858efc34ed963 kostenfrei http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 kostenfrei https://doaj.org/toc/2051-3305 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2017 |
allfieldsGer |
10.1049/joe.2016.0330 doi (DE-627)DOAJ001269283 (DE-599)DOAJe034f549829c491ab8c858efc34ed963 DE-627 ger DE-627 rakwb eng TA1-2040 Ke Jin verfasserin aut Action recognition using vague division DMMs 2017 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset Engineering (General). 
Civil engineering (General) Min Jiang verfasserin aut Jun Kong verfasserin aut Hongtao Huo verfasserin aut Xiaofeng Wang verfasserin aut In The Journal of Engineering Wiley, 2013 (2017) (DE-627)75682270X (DE-600)2727074-9 20513305 nnns year:2017 https://doi.org/10.1049/joe.2016.0330 kostenfrei https://doaj.org/article/e034f549829c491ab8c858efc34ed963 kostenfrei http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 kostenfrei https://doaj.org/toc/2051-3305 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2017 |
allfieldsSound |
10.1049/joe.2016.0330 doi (DE-627)DOAJ001269283 (DE-599)DOAJe034f549829c491ab8c858efc34ed963 DE-627 ger DE-627 rakwb eng TA1-2040 Ke Jin verfasserin aut Action recognition using vague division DMMs 2017 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset Engineering (General). 
Civil engineering (General) Min Jiang verfasserin aut Jun Kong verfasserin aut Hongtao Huo verfasserin aut Xiaofeng Wang verfasserin aut In The Journal of Engineering Wiley, 2013 (2017) (DE-627)75682270X (DE-600)2727074-9 20513305 nnns year:2017 https://doi.org/10.1049/joe.2016.0330 kostenfrei https://doaj.org/article/e034f549829c491ab8c858efc34ed963 kostenfrei http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 kostenfrei https://doaj.org/toc/2051-3305 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2017 |
language |
English |
source |
In The Journal of Engineering (2017) year:2017 |
sourceStr |
In The Journal of Engineering (2017) year:2017 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset Engineering (General). Civil engineering (General) |
isfreeaccess_bool |
true |
container_title |
The Journal of Engineering |
authorswithroles_txt_mv |
Ke Jin @@aut@@ Min Jiang @@aut@@ Jun Kong @@aut@@ Hongtao Huo @@aut@@ Xiaofeng Wang @@aut@@ |
publishDateDaySort_date |
2017-01-01T00:00:00Z |
hierarchy_top_id |
75682270X |
id |
DOAJ001269283 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ001269283</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230309162235.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230225s2017 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1049/joe.2016.0330</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ001269283</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJe034f549829c491ab8c858efc34ed963</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TA1-2040</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Ke Jin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Action recognition using vague division DMMs</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. 
The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">object recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">video signal processing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image sequences</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image motion analysis</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image classification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">probability</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image representation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">human action recognition method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">vague division DMM</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">depth map sequences</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">vague boundary</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">VB-sequence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">adjacent sequences</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">original subsequence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">orthogonal Cartesian planes</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">absolute value</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">depth motion map</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">video sequence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">robust probabilistic collaborative representation classification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">MSR Action Pairs dataset</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">MSR Gesture 3D dataset</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">MSR Action3D dataset</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">UTD-MHAD dataset</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Engineering (General). 
Civil engineering (General)</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Min Jiang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Jun Kong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Hongtao Huo</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xiaofeng Wang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">The Journal of Engineering</subfield><subfield code="d">Wiley, 2013</subfield><subfield code="g">(2017)</subfield><subfield code="w">(DE-627)75682270X</subfield><subfield code="w">(DE-600)2727074-9</subfield><subfield code="x">20513305</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">year:2017</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1049/joe.2016.0330</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/e034f549829c491ab8c858efc34ed963</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2051-3305</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="j">2017</subfield></datafield></record></collection>
|
callnumber-first |
T - Technology |
author |
Ke Jin |
spellingShingle |
Ke Jin misc TA1-2040 misc object recognition misc video signal processing misc image sequences misc image motion analysis misc image classification misc probability misc image representation misc human action recognition method misc vague division DMM misc depth map sequences misc vague boundary misc VB-sequence misc adjacent sequences misc original subsequence misc orthogonal Cartesian planes misc absolute value misc depth motion map misc video sequence misc robust probabilistic collaborative representation classification misc MSR Action Pairs dataset misc MSR Gesture 3D dataset misc MSR Action3D dataset misc UTD-MHAD dataset misc Engineering (General). Civil engineering (General) Action recognition using vague division DMMs |
authorStr |
Ke Jin |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)75682270X |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TA1-2040 |
illustrated |
Not Illustrated |
issn |
20513305 |
topic_title |
TA1-2040 Action recognition using vague division DMMs object recognition video signal processing image sequences image motion analysis image classification probability image representation human action recognition method vague division DMM depth map sequences vague boundary VB-sequence adjacent sequences original subsequence orthogonal Cartesian planes absolute value depth motion map video sequence robust probabilistic collaborative representation classification MSR Action Pairs dataset MSR Gesture 3D dataset MSR Action3D dataset UTD-MHAD dataset |
topic |
misc TA1-2040 misc object recognition misc video signal processing misc image sequences misc image motion analysis misc image classification misc probability misc image representation misc human action recognition method misc vague division DMM misc depth map sequences misc vague boundary misc VB-sequence misc adjacent sequences misc original subsequence misc orthogonal Cartesian planes misc absolute value misc depth motion map misc video sequence misc robust probabilistic collaborative representation classification misc MSR Action Pairs dataset misc MSR Gesture 3D dataset misc MSR Action3D dataset misc UTD-MHAD dataset misc Engineering (General). Civil engineering (General) |
topic_unstemmed |
misc TA1-2040 misc object recognition misc video signal processing misc image sequences misc image motion analysis misc image classification misc probability misc image representation misc human action recognition method misc vague division DMM misc depth map sequences misc vague boundary misc VB-sequence misc adjacent sequences misc original subsequence misc orthogonal Cartesian planes misc absolute value misc depth motion map misc video sequence misc robust probabilistic collaborative representation classification misc MSR Action Pairs dataset misc MSR Gesture 3D dataset misc MSR Action3D dataset misc UTD-MHAD dataset misc Engineering (General). Civil engineering (General) |
topic_browse |
misc TA1-2040 misc object recognition misc video signal processing misc image sequences misc image motion analysis misc image classification misc probability misc image representation misc human action recognition method misc vague division DMM misc depth map sequences misc vague boundary misc VB-sequence misc adjacent sequences misc original subsequence misc orthogonal Cartesian planes misc absolute value misc depth motion map misc video sequence misc robust probabilistic collaborative representation classification misc MSR Action Pairs dataset misc MSR Gesture 3D dataset misc MSR Action3D dataset misc UTD-MHAD dataset misc Engineering (General). Civil engineering (General) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
The Journal of Engineering |
hierarchy_parent_id |
75682270X |
hierarchy_top_title |
The Journal of Engineering |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)75682270X (DE-600)2727074-9 |
title |
Action recognition using vague division DMMs |
ctrlnum |
(DE-627)DOAJ001269283 (DE-599)DOAJe034f549829c491ab8c858efc34ed963 |
title_full |
Action recognition using vague division DMMs |
author_sort |
Ke Jin |
journal |
The Journal of Engineering |
journalStr |
The Journal of Engineering |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2017 |
contenttype_str_mv |
txt |
author_browse |
Ke Jin Min Jiang Jun Kong Hongtao Huo Xiaofeng Wang |
class |
TA1-2040 |
format_se |
Elektronische Aufsätze |
author-letter |
Ke Jin |
doi_str_mv |
10.1049/joe.2016.0330 |
author2-role |
verfasserin |
title_sort |
action recognition using vague division dmms |
callnumber |
TA1-2040 |
title_auth |
Action recognition using vague division DMMs |
abstract |
This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. |
abstractGer |
This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. |
abstract_unstemmed |
This study presents a novel human action recognition method based on the sequences of depth maps, which provide additional body shape and motion information for action recognition. First, the authors divide each depth sequence into a number of sub-sequences. All these sub-sequences are of uniform length. By controlling vague boundary (VB), they construct a VB-sequence which consists of an original sub-sequence and its adjacent sequences. Then, each depth frame in a VB-sequence is projected onto three orthogonal Cartesian planes, and the absolute value of the difference between two consecutive projected maps is accumulated to form a depth motion map (DMM) to describe the dynamic feature of a VB-sequence. Finally, they concatenate the DMMs of all the VB-sequences in one video sequence to describe an action. Collectively, they call them VB division of depth model. For classification, they apply robust probabilistic collaborative representation classification. The recognition results applied to the MSR Action Pairs, MSR Gesture 3D, MSR Action3D, and UTD-MHAD datasets indicate superior performance of their method over most existing methods. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Action recognition using vague division DMMs |
url |
https://doi.org/10.1049/joe.2016.0330 https://doaj.org/article/e034f549829c491ab8c858efc34ed963 http://digital-library.theiet.org/content/journals/10.1049/joe.2016.0330 https://doaj.org/toc/2051-3305 |
remote_bool |
true |
author2 |
Min Jiang Jun Kong Hongtao Huo Xiaofeng Wang |
author2Str |
Min Jiang Jun Kong Hongtao Huo Xiaofeng Wang |
ppnlink |
75682270X |
callnumber-subject |
TA - General and Civil Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1049/joe.2016.0330 |
callnumber-a |
TA1-2040 |
up_date |
2024-07-03T19:25:35.962Z |
_version_ |
1803587143299235840 |