Neural Hand Reconstruction Using A Single RGB Image
We present a neural hand reconstruction method for monocular 3D hand pose and shape estimation in this paper. Instead of directly representing hand with 3D data, a novel UV position map is introduced to represent hand pose and shape with 2D data, which maps 3D hand surface points to 2D image space. Furthermore, an encoder-decoder neural network is proposed to infer such UV position map from only single image. To train such network with the lack of ground truth training pairs, we propose a novel MANOReg module which employs MANO model as shape prior to constrain high-dimensional space of UV position map. Both quantitative and qualitative experiments demonstrate the effectiveness of our UV position map representation and MANOReg module.
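The core idea in the abstract, a UV position map, can be illustrated with a toy example: a small 2D image whose three channels store the 3D coordinates of surface points, so that looking up a (u, v) texel recovers a 3D point. The numpy sketch below is not the authors' implementation; the resolution and the synthetic hemisphere surface (standing in for a hand surface) are assumptions for illustration only.

```python
import numpy as np

def make_uv_position_map(res=32):
    """Toy UV position map: each (u, v) texel stores the 3D point of a
    synthetic surface (a hemisphere standing in for a hand surface)."""
    u, v = np.meshgrid(np.linspace(0.0, 1.0, res), np.linspace(0.0, 1.0, res))
    theta = u * np.pi          # azimuth angle derived from u
    phi = v * (np.pi / 2.0)    # elevation angle derived from v
    x = np.cos(theta) * np.cos(phi)
    y = np.sin(theta) * np.cos(phi)
    z = np.sin(phi)
    return np.stack([x, y, z], axis=-1)   # shape (res, res, 3)

def lookup(pos_map, uv):
    """Nearest-texel lookup: map a 2D UV coordinate back to a 3D point."""
    res = pos_map.shape[0]
    row = min(int(round(uv[1] * (res - 1))), res - 1)
    col = min(int(round(uv[0] * (res - 1))), res - 1)
    return pos_map[row, col]

pos_map = make_uv_position_map()
p = lookup(pos_map, (0.0, 0.0))   # u=0, v=0 -> theta=0, phi=0 -> (1, 0, 0)
```

In the paper's setting a network would regress such a map from an RGB image; here the map is built analytically just to show how a 2D image can encode a 3D surface.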
Detailed description

Author(s): Mengcheng Li [author]; Liang An [author]; Tao Yu [author]; Yangang Wang [author]; Feng Chen [author]; Yebin Liu [author]
Format: Electronic article
Language: English
Published: 2020
Part of: In: Virtual Reality & Intelligent Hardware - KeAi Communications Co., Ltd., 2020, 2(2020), 3, pages 276-289
Part of: volume:2; year:2020; number:3; pages:276-289
DOI / URN: 10.1016/j.vrih.2020.05.001
Catalog ID: DOAJ012672092
LEADER 01000caa a22002652 4500
001    DOAJ012672092
003    DE-627
005    20230310045835.0
007    cr uuu---uuuuu
008    230225s2020 xx |||||o 00| ||eng c
024 7_ |a 10.1016/j.vrih.2020.05.001 |2 doi
035 __ |a (DE-627)DOAJ012672092
035 __ |a (DE-599)DOAJ6714998cd21f458996bf9ca58fad890a
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
050 _0 |a TK7885-7895
100 0_ |a Mengcheng Li |e verfasserin |4 aut
245 10 |a Neural Hand Reconstruction Using A Single RGB Image
264 _1 |c 2020
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
520 __ |a We present a neural hand reconstruction method for monocular 3D hand pose and shape estimation in this paper. Instead of directly representing hand with 3D data, a novel UV position map is introduced to represent hand pose and shape with 2D data, which maps 3D hand surface points to 2D image space. Furthermore, an encoder-decoder neural network is proposed to infer such UV position map from only single image. To train such network with the lack of ground truth training pairs, we propose a novel MANOReg module which employs MANO model as shape prior to constrain high-dimensional space of UV position map. Both quantitative and qualitative experiments demonstrate the effectiveness of our UV position map representation and MANOReg module.
650 _4 |a hand reconstruction
650 _4 |a CNN
650 _4 |a single image
650 _4 |a motion capture
653 _0 |a Computer engineering. Computer hardware
700 0_ |a Liang An |e verfasserin |4 aut
700 0_ |a Tao Yu |e verfasserin |4 aut
700 0_ |a Yangang Wang |e verfasserin |4 aut
700 0_ |a Feng Chen |e verfasserin |4 aut
700 0_ |a Yebin Liu |e verfasserin |4 aut
773 08 |i In |t Virtual Reality & Intelligent Hardware |d KeAi Communications Co., Ltd., 2020 |g 2(2020), 3, Seite 276-289 |w (DE-627)1692190121 |w (DE-600)3011166-3 |x 26661209 |7 nnns
773 18 |g volume:2 |g year:2020 |g number:3 |g pages:276-289
856 40 |u https://doi.org/10.1016/j.vrih.2020.05.001 |z kostenfrei
856 40 |u https://doaj.org/article/6714998cd21f458996bf9ca58fad890a |z kostenfrei
856 40 |u http://www.sciencedirect.com/science/article/pii/S2096579620300371 |z kostenfrei
856 42 |u https://doaj.org/toc/2096-5796 |y Journal toc |z kostenfrei
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_DOAJ
912 __ |a GBV_ILN_11
912 __ |a GBV_ILN_20
912 __ |a GBV_ILN_21
912 __ |a GBV_ILN_22
912 __ |a GBV_ILN_23
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_31
912 __ |a GBV_ILN_39
912 __ |a GBV_ILN_40
912 __ |a GBV_ILN_60
912 __ |a GBV_ILN_62
912 __ |a GBV_ILN_63
912 __ |a GBV_ILN_65
912 __ |a GBV_ILN_69
912 __ |a GBV_ILN_70
912 __ |a GBV_ILN_73
912 __ |a GBV_ILN_95
912 __ |a GBV_ILN_105
912 __ |a GBV_ILN_110
912 __ |a GBV_ILN_151
912 __ |a GBV_ILN_161
912 __ |a GBV_ILN_170
912 __ |a GBV_ILN_213
912 __ |a GBV_ILN_230
912 __ |a GBV_ILN_285
912 __ |a GBV_ILN_293
912 __ |a GBV_ILN_370
912 __ |a GBV_ILN_602
912 __ |a GBV_ILN_2014
912 __ |a GBV_ILN_4012
912 __ |a GBV_ILN_4037
912 __ |a GBV_ILN_4112
912 __ |a GBV_ILN_4125
912 __ |a GBV_ILN_4126
912 __ |a GBV_ILN_4249
912 __ |a GBV_ILN_4305
912 __ |a GBV_ILN_4306
912 __ |a GBV_ILN_4307
912 __ |a GBV_ILN_4313
912 __ |a GBV_ILN_4322
912 __ |a GBV_ILN_4323
912 __ |a GBV_ILN_4324
912 __ |a GBV_ILN_4325
912 __ |a GBV_ILN_4326
912 __ |a GBV_ILN_4335
912 __ |a GBV_ILN_4338
912 __ |a GBV_ILN_4367
912 __ |a GBV_ILN_4700
951 __ |a AR
952 __ |d 2 |j 2020 |e 3 |h 276-289
author_variant
m l ml l a la t y ty y w yw f c fc y l yl
matchkey_str |
article:26661209:2020----::erladeosrcinsnai |
hierarchy_sort_str |
2020 |
callnumber-subject-code |
TK |
publishDate |
2020 |
language |
English |
source |
In Virtual Reality & Intelligent Hardware 2(2020), 3, Seite 276-289 volume:2 year:2020 number:3 pages:276-289 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
hand reconstruction CNN single image motion capture Computer engineering. Computer hardware |
isfreeaccess_bool |
true |
container_title |
Virtual Reality & Intelligent Hardware |
authorswithroles_txt_mv |
Mengcheng Li @@aut@@ Liang An @@aut@@ Tao Yu @@aut@@ Yangang Wang @@aut@@ Feng Chen @@aut@@ Yebin Liu @@aut@@ |
publishDateDaySort_date |
2020-01-01T00:00:00Z |
hierarchy_top_id |
1692190121 |
id |
DOAJ012672092 |
language_de |
englisch |
callnumber-first |
T - Technology |
author |
Mengcheng Li |
spellingShingle |
Mengcheng Li misc TK7885-7895 misc hand reconstruction misc CNN misc single image misc motion capture misc Computer engineering. Computer hardware Neural Hand Reconstruction Using A Single RGB Image |
authorStr |
Mengcheng Li |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)1692190121 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK7885-7895 |
illustrated |
Not Illustrated |
issn |
26661209 |
topic_title |
TK7885-7895 Neural Hand Reconstruction Using A Single RGB Image hand reconstruction CNN single image motion capture |
topic |
misc TK7885-7895 misc hand reconstruction misc CNN misc single image misc motion capture misc Computer engineering. Computer hardware |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Virtual Reality & Intelligent Hardware |
hierarchy_parent_id |
1692190121 |
hierarchy_top_title |
Virtual Reality & Intelligent Hardware |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)1692190121 (DE-600)3011166-3 |
title |
Neural Hand Reconstruction Using A Single RGB Image |
ctrlnum |
(DE-627)DOAJ012672092 (DE-599)DOAJ6714998cd21f458996bf9ca58fad890a |
title_full |
Neural Hand Reconstruction Using A Single RGB Image |
author_sort |
Mengcheng Li |
journal |
Virtual Reality & Intelligent Hardware |
journalStr |
Virtual Reality & Intelligent Hardware |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
txt |
container_start_page |
276 |
author_browse |
Mengcheng Li Liang An Tao Yu Yangang Wang Feng Chen Yebin Liu |
container_volume |
2 |
class |
TK7885-7895 |
format_se |
Elektronische Aufsätze |
author-letter |
Mengcheng Li |
doi_str_mv |
10.1016/j.vrih.2020.05.001 |
author2-role |
verfasserin |
title_sort |
neural hand reconstruction using a single rgb image |
callnumber |
TK7885-7895 |
title_auth |
Neural Hand Reconstruction Using A Single RGB Image |
abstract |
We present a neural hand reconstruction method for monocular 3D hand pose and shape estimation in this paper. Instead of directly representing hand with 3D data, a novel UV position map is introduced to represent hand pose and shape with 2D data, which maps 3D hand surface points to 2D image space. Furthermore, an encoder-decoder neural network is proposed to infer such UV position map from only single image. To train such network with the lack of ground truth training pairs, we propose a novel MANOReg module which employs MANO model as shape prior to constrain high-dimensional space of UV position map. Both quantitative and qualitative experiments demonstrate the effectiveness of our UV position map representation and MANOReg module. |
abstractGer |
We present a neural hand reconstruction method for monocular 3D hand pose and shape estimation in this paper. Instead of directly representing hand with 3D data, a novel UV position map is introduced to represent hand pose and shape with 2D data, which maps 3D hand surface points to 2D image space. Furthermore, an encoder-decoder neural network is proposed to infer such UV position map from only single image. To train such network with the lack of ground truth training pairs, we propose a novel MANOReg module which employs MANO model as shape prior to constrain high-dimensional space of UV position map. Both quantitative and qualitative experiments demonstrate the effectiveness of our UV position map representation and MANOReg module. |
abstract_unstemmed |
We present a neural hand reconstruction method for monocular 3D hand pose and shape estimation. Instead of representing the hand directly with 3D data, a novel UV position map is introduced to represent hand pose and shape with 2D data, mapping 3D hand surface points to 2D image space. Furthermore, an encoder-decoder neural network is proposed to infer this UV position map from a single image. To train the network despite the lack of ground-truth training pairs, we propose a novel MANOReg module, which employs the MANO model as a shape prior to constrain the high-dimensional space of the UV position map. Both quantitative and qualitative experiments demonstrate the effectiveness of our UV position map representation and MANOReg module.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_21 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
container_issue |
3 |
title_short |
Neural Hand Reconstruction Using A Single RGB Image |
url |
https://doi.org/10.1016/j.vrih.2020.05.001 https://doaj.org/article/6714998cd21f458996bf9ca58fad890a http://www.sciencedirect.com/science/article/pii/S2096579620300371 https://doaj.org/toc/2096-5796 |
remote_bool |
true |
author2 |
Liang An Tao Yu Yangang Wang Feng Chen Yebin Liu |
author2Str |
Liang An Tao Yu Yangang Wang Feng Chen Yebin Liu |
ppnlink |
1692190121 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.vrih.2020.05.001 |
callnumber-a |
TK7885-7895 |
up_date |
2024-07-03T13:24:30.135Z |
_version_ |
1803564425034072064 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ012672092</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230310045835.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230225s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.vrih.2020.05.001</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ012672092</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ6714998cd21f458996bf9ca58fad890a</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK7885-7895</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Mengcheng Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Neural Hand Reconstruction Using A Single RGB Image</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We present a neural hand reconstruction method for monocular 3D hand pose and shape estimation. Instead of representing the hand directly with 3D data, a novel UV position map is introduced to represent hand pose and shape with 2D data, mapping 3D hand surface points to 2D image space. Furthermore, an encoder-decoder neural network is proposed to infer this UV position map from a single image. To train the network despite the lack of ground-truth training pairs, we propose a novel MANOReg module, which employs the MANO model as a shape prior to constrain the high-dimensional space of the UV position map. Both quantitative and qualitative experiments demonstrate the effectiveness of our UV position map representation and MANOReg module.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">hand reconstruction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">CNN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">single image</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">motion capture</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computer engineering. 
Computer hardware</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Liang An</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Tao Yu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yangang Wang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Feng Chen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yebin Liu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Virtual Reality & Intelligent Hardware</subfield><subfield code="d">KeAi Communications Co., Ltd., 2020</subfield><subfield code="g">2(2020), 3, Seite 276-289</subfield><subfield code="w">(DE-627)1692190121</subfield><subfield code="w">(DE-600)3011166-3</subfield><subfield code="x">26661209</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:2</subfield><subfield code="g">year:2020</subfield><subfield code="g">number:3</subfield><subfield code="g">pages:276-289</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.vrih.2020.05.001</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/6714998cd21f458996bf9ca58fad890a</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">http://www.sciencedirect.com/science/article/pii/S2096579620300371</subfield><subfield 
code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2096-5796</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_21</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">2</subfield><subfield code="j">2020</subfield><subfield code="e">3</subfield><subfield code="h">276-289</subfield></datafield></record></collection>
|