Construction of Virtual Video Scene and Its Visualization During Sports Training
This article studies actually captured human motion data for human motion synthesis and style transfer, constructs a virtual motion-video scene, and attempts to generate human motion-style video directly in order to establish a self-encoding-based sports style transfer model. The original human motion capture data are mapped to a motion feature space for style-transfer synthesis: a coding network maps the high-dimensional motion capture data to a low-dimensional feature space, the motion style transfer constraints are established in that feature space, and the human body motion after style transfer is obtained by decoding. The paper proposes a pixel-level human motion style transfer model based on conditional adversarial networks and uses convolutional branches to build two coding networks that extract features from the input style video and the content pictures. The decoding network decodes the combined features and generates a human motion video frame by frame. A Gram matrix establishes constraints on the encoding and decoding features, controls the movement style of the human body, and finally realizes the visualization of the movement process. An incremental learning method based on a cascade network improves accuracy and achieves a posture measurement frequency of 200 Hz. The results provide a key foundation for improving the sense of immersion in visual and tactile sports interaction simulation.
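The abstract describes constraining the encoder and decoder features with a Gram matrix to control movement style. As a rough illustration only (not the authors' implementation; the tensor layout and the use of a mean-squared-error penalty are assumptions), a Gram-matrix style constraint of the kind used in neural style transfer can be sketched as:

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise correlation matrix of a (batch, channels, height, width) feature map."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    # Normalize so the loss scale does not depend on feature-map resolution.
    return torch.bmm(flat, flat.transpose(1, 2)) / (c * h * w)

def style_constraint(generated_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Penalize differences between the Gram matrices of generated and style features."""
    return F.mse_loss(gram_matrix(generated_feats), gram_matrix(style_feats))
```

Minimizing such a term pushes the feature statistics of the generated frames toward those of the style video, while a separate content term would preserve the underlying motion content.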
Detailed Description

Author(s): Rui Yuan [author], Zhendong Zhang [author], Pengwei Song [author], Jia Zhang [author], Long Qin [author]
Format: E-article
Language: English
Published: 2020
Subjects: Virtual video; scene construction; movement process; visualization
Contained in: IEEE Access - IEEE, 2014, 8(2020), pages 124999-125012
Contained in: volume:8 ; year:2020 ; pages:124999-125012
Links: https://doi.org/10.1109/ACCESS.2020.3007897 ; https://doaj.org/article/9d84ac67cf804009a9f194407024c90d ; https://ieeexplore.ieee.org/document/9136704/
DOI / URN: 10.1109/ACCESS.2020.3007897
Catalog ID: DOAJ072526084
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | DOAJ072526084 | ||
003 | DE-627 | ||
005 | 20230309110713.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230228s2020 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1109/ACCESS.2020.3007897 |2 doi | |
035 | |a (DE-627)DOAJ072526084 | ||
035 | |a (DE-599)DOAJ9d84ac67cf804009a9f194407024c90d | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
050 | 0 | |a TK1-9971 | |
100 | 0 | |a Rui Yuan |e verfasserin |4 aut | |
245 | 1 | 0 | |a Construction of Virtual Video Scene and Its Visualization During Sports Training |
264 | 1 | |c 2020 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a This article studies actually captured human motion data for human motion synthesis and style transfer, constructs a virtual motion-video scene, and attempts to generate human motion-style video directly in order to establish a self-encoding-based sports style transfer model. The original human motion capture data are mapped to a motion feature space for style-transfer synthesis: a coding network maps the high-dimensional motion capture data to a low-dimensional feature space, the motion style transfer constraints are established in that feature space, and the human body motion after style transfer is obtained by decoding. The paper proposes a pixel-level human motion style transfer model based on conditional adversarial networks and uses convolutional branches to build two coding networks that extract features from the input style video and the content pictures. The decoding network decodes the combined features and generates a human motion video frame by frame. A Gram matrix establishes constraints on the encoding and decoding features, controls the movement style of the human body, and finally realizes the visualization of the movement process. An incremental learning method based on a cascade network improves accuracy and achieves a posture measurement frequency of 200 Hz. The results provide a key foundation for improving the sense of immersion in visual and tactile sports interaction simulation. | ||
650 | 4 | |a Virtual video | |
650 | 4 | |a scene construction | |
650 | 4 | |a movement process | |
650 | 4 | |a visualization | |
653 | 0 | |a Electrical engineering. Electronics. Nuclear engineering | |
700 | 0 | |a Zhendong Zhang |e verfasserin |4 aut | |
700 | 0 | |a Pengwei Song |e verfasserin |4 aut | |
700 | 0 | |a Jia Zhang |e verfasserin |4 aut | |
700 | 0 | |a Long Qin |e verfasserin |4 aut | |
773 | 0 | 8 | |i In |t IEEE Access |d IEEE, 2014 |g 8(2020), Seite 124999-125012 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns |
773 | 1 | 8 | |g volume:8 |g year:2020 |g pages:124999-125012 |
856 | 4 | 0 | |u https://doi.org/10.1109/ACCESS.2020.3007897 |z kostenfrei |
856 | 4 | 0 | |u https://doaj.org/article/9d84ac67cf804009a9f194407024c90d |z kostenfrei |
856 | 4 | 0 | |u https://ieeexplore.ieee.org/document/9136704/ |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_DOAJ | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_4012 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4367 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 8 |j 2020 |h 124999-125012 |
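The 520 abstract above describes two branch coding (encoder) networks, one for the style video and one for the content pictures, whose combined features a decoding network turns into video frames. The sketch below is a hypothetical, minimal PyTorch arrangement of that idea: the layer counts, channel sizes, and concatenation strategy are assumptions, and the conditional discriminator and Gram-matrix terms are omitted.

```python
import torch
import torch.nn as nn

class BranchEncoder(nn.Module):
    """Convolutional encoder used for both the style branch and the content branch."""
    def __init__(self, in_channels: int = 3, feat_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels * 2, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class FrameDecoder(nn.Module):
    """Decoder that turns the concatenated style/content features into one video frame."""
    def __init__(self, feat_channels: int = 64, out_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_channels * 4, feat_channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_channels, out_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Hypothetical usage: encode one style-video frame and one content picture,
# concatenate their features, and decode a single output frame.
style_enc, content_enc, decoder = BranchEncoder(), BranchEncoder(), FrameDecoder()
style_frame = torch.randn(1, 3, 128, 128)   # frame taken from the style video
content_pic = torch.randn(1, 3, 128, 128)   # content picture
z = torch.cat([style_enc(style_frame), content_enc(content_pic)], dim=1)
frame = decoder(z)                          # (1, 3, 128, 128) generated frame
```

In the full model the abstract describes, a generator of this shape would be trained against a conditional adversarial discriminator, with the Gram-matrix constraints applied to the encoder and decoder features to control movement style.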