Application of Multiscale Facial Feature Manifold Learning Based on VGG-16
Purpose. To address the problems of small face image samples, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) learning algorithm based on VGG16. Method. We first build the VGG16 architecture to obtain face features at different scales and construct a multiscale facial feature manifold with the features at each scale as its dimensions. The recognition rate, accuracy, and running time are then used to compare the performance of VGG16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses a key problem in face detection and tracking in the public security field: it predicts the position of the face target image in the time-dimension manifold space and improves the efficiency of face detection.
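The core idea of the method — extract features at several scales and fuse them into one multiscale descriptor that can be compared across frames — can be sketched as follows. This is an illustrative NumPy stand-in, not the authors' implementation: in the paper the per-scale features come from VGG16's convolutional layers, whereas here `multiscale_descriptor`, the pooling scales, and the toy images are hypothetical.

```python
import numpy as np

def multiscale_descriptor(image, scales=(1, 2, 4)):
    """Build a multiscale feature vector by average-pooling the image
    into an (s x s) grid at each scale and concatenating the results.
    (Illustrative stand-in for VGG16 feature maps at different depths.)"""
    h, w = image.shape
    feats = []
    for s in scales:
        # Split rows/columns into s blocks and take each block's mean.
        ys = np.array_split(np.arange(h), s)
        xs = np.array_split(np.arange(w), s)
        pooled = np.array([[image[np.ix_(y, x)].mean() for x in xs] for y in ys])
        feats.append(pooled.ravel())
    return np.concatenate(feats)  # length = sum(s*s for s in scales)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage: the same face crop under a brightness change stays more
# similar in descriptor space than an unrelated image patch.
rng = np.random.default_rng(0)
face = rng.random((32, 32))
same = np.clip(face * 1.1, 0, 1)   # brightened copy of the same crop
other = rng.random((32, 32))       # unrelated patch
d_face, d_same, d_other = map(multiscale_descriptor, (face, same, other))
assert cosine_similarity(d_face, d_same) > cosine_similarity(d_face, d_other)
```

Matching descriptors by similarity like this is the simplest version of the recapture step; the paper's manifold additionally models how the descriptor moves along the time dimension.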
Detailed Description

Author(s): Huilin Ge, Zhiyu Zhu, Runbang Liu, Xuedong Wu
Format: E-article
Language: English
Published: 2021
Parent work: In: Journal of Sensors - Hindawi Limited, 2008, (2021)
Links: Open link
DOI: 10.1155/2021/7129800
Catalog ID: DOAJ012022578
LEADER 01000caa a22002652 4500
001 DOAJ012022578
003 DE-627
005 20230503073732.0
007 cr uuu---uuuuu
008 230225s2021 xx |||||o 00| ||eng c
024 7  |a 10.1155/2021/7129800 |2 doi
035    |a (DE-627)DOAJ012022578
035    |a (DE-599)DOAJ0eb618540c6445b0a686a783ceed09db
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a T1-995
100 0  |a Huilin Ge |e verfasserin |4 aut
245 1 0 |a Application of Multiscale Facial Feature Manifold Learning Based on VGG-16
264  1 |c 2021
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Purpose. To address the problems of small face image samples, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) learning algorithm based on VGG16. Method. We first build the VGG16 architecture to obtain face features at different scales and construct a multiscale facial feature manifold with the features at each scale as its dimensions. The recognition rate, accuracy, and running time are then used to compare the performance of VGG16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses a key problem in face detection and tracking in the public security field: it predicts the position of the face target image in the time-dimension manifold space and improves the efficiency of face detection.
653  0 |a Technology (General)
700 0  |a Zhiyu Zhu |e verfasserin |4 aut
700 0  |a Runbang Liu |e verfasserin |4 aut
700 0  |a Xuedong Wu |e verfasserin |4 aut
773 0 8 |i In |t Journal of Sensors |d Hindawi Limited, 2008 |g (2021) |w (DE-627)550736751 |w (DE-600)2397931-8 |x 1687725X |7 nnns
773 1 8 |g year:2021
856 4 0 |u https://doi.org/10.1155/2021/7129800 |z kostenfrei
856 4 0 |u https://doaj.org/article/0eb618540c6445b0a686a783ceed09db |z kostenfrei
856 4 0 |u http://dx.doi.org/10.1155/2021/7129800 |z kostenfrei
856 4 2 |u https://doaj.org/toc/1687-725X |y Journal toc |z kostenfrei
856 4 2 |u https://doaj.org/toc/1687-7268 |y Journal toc |z kostenfrei
951    |a AR
952    |j 2021
code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="j">2021</subfield></datafield></record></collection>
|
callnumber-first |
T - Technology |
author |
Huilin Ge |
spellingShingle |
Huilin Ge misc T1-995 misc Technology (General) Application of Multiscale Facial Feature Manifold Learning Based on VGG-16 |
authorStr |
Huilin Ge |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)550736751 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
T1-995 |
illustrated |
Not Illustrated |
issn |
1687725X |
topic_title |
T1-995 Application of Multiscale Facial Feature Manifold Learning Based on VGG-16 |
topic |
misc T1-995 misc Technology (General) |
topic_unstemmed |
misc T1-995 misc Technology (General) |
topic_browse |
misc T1-995 misc Technology (General) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Journal of Sensors |
hierarchy_parent_id |
550736751 |
hierarchy_top_title |
Journal of Sensors |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)550736751 (DE-600)2397931-8 |
title |
Application of Multiscale Facial Feature Manifold Learning Based on VGG-16 |
ctrlnum |
(DE-627)DOAJ012022578 (DE-599)DOAJ0eb618540c6445b0a686a783ceed09db |
title_full |
Application of Multiscale Facial Feature Manifold Learning Based on VGG-16 |
author_sort |
Huilin Ge |
journal |
Journal of Sensors |
journalStr |
Journal of Sensors |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
txt |
author_browse |
Huilin Ge Zhiyu Zhu Runbang Liu Xuedong Wu |
class |
T1-995 |
format_se |
Elektronische Aufsätze |
author-letter |
Huilin Ge |
doi_str_mv |
10.1155/2021/7129800 |
author2-role |
verfasserin |
title_sort |
application of multiscale facial feature manifold learning based on vgg-16 |
callnumber |
T1-995 |
title_auth |
Application of Multiscale Facial Feature Manifold Learning Based on VGG-16 |
abstract |
Purpose. To address the problems of small face image sample sizes, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) learning algorithm based on VGG-16. Method. We first build the VGG-16 architecture to extract face features at different scales and then construct a multiscale face feature manifold whose dimensions are the face features at those scales. The recognition rate, accuracy, and running time are used to evaluate the performance of VGG-16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG-16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, which is 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses the key problem of face detection and tracking in the public security field, predicts the position of the face target image in the time-dimension manifold space, and improves the efficiency of face detection.
abstractGer |
Purpose. To address the problems of small face image sample sizes, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) learning algorithm based on VGG-16. Method. We first build the VGG-16 architecture to extract face features at different scales and then construct a multiscale face feature manifold whose dimensions are the face features at those scales. The recognition rate, accuracy, and running time are used to evaluate the performance of VGG-16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG-16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, which is 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses the key problem of face detection and tracking in the public security field, predicts the position of the face target image in the time-dimension manifold space, and improves the efficiency of face detection.
abstract_unstemmed |
Purpose. To address the problems of small face image sample sizes, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) learning algorithm based on VGG-16. Method. We first build the VGG-16 architecture to extract face features at different scales and then construct a multiscale face feature manifold whose dimensions are the face features at those scales. The recognition rate, accuracy, and running time are used to evaluate the performance of VGG-16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG-16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, which is 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses the key problem of face detection and tracking in the public security field, predicts the position of the face target image in the time-dimension manifold space, and improves the efficiency of face detection.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_165 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Application of Multiscale Facial Feature Manifold Learning Based on VGG-16 |
url |
https://doi.org/10.1155/2021/7129800 https://doaj.org/article/0eb618540c6445b0a686a783ceed09db http://dx.doi.org/10.1155/2021/7129800 https://doaj.org/toc/1687-725X https://doaj.org/toc/1687-7268 |
remote_bool |
true |
author2 |
Zhiyu Zhu Runbang Liu Xuedong Wu |
author2Str |
Zhiyu Zhu Runbang Liu Xuedong Wu |
ppnlink |
550736751 |
callnumber-subject |
T - General Technology |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1155/2021/7129800 |
callnumber-a |
T1-995 |
up_date |
2024-07-03T23:23:15.559Z |
_version_ |
1803602095571468288 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ012022578</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503073732.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230225s2021 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1155/2021/7129800</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ012022578</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ0eb618540c6445b0a686a783ceed09db</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">T1-995</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Huilin Ge</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Application of Multiscale Facial Feature Manifold Learning Based on VGG-16</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Purpose. To address the problems of small face image sample sizes, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) learning algorithm based on VGG-16. Method. We first build the VGG-16 architecture to extract face features at different scales and then construct a multiscale face feature manifold whose dimensions are the face features at those scales. The recognition rate, accuracy, and running time are used to evaluate the performance of VGG-16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG-16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, which is 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses the key problem of face detection and tracking in the public security field, predicts the position of the face target image in the time-dimension manifold space, and improves the efficiency of face detection.</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Technology (General)</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Zhiyu Zhu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Runbang Liu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xuedong Wu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Journal of Sensors</subfield><subfield code="d">Hindawi Limited, 2008</subfield><subfield code="g">(2021)</subfield><subfield code="w">(DE-627)550736751</subfield><subfield code="w">(DE-600)2397931-8</subfield><subfield code="x">1687725X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">year:2021</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1155/2021/7129800</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/0eb618540c6445b0a686a783ceed09db</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">http://dx.doi.org/10.1155/2021/7129800</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/1687-725X</subfield><subfield code="y">Journal 
toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/1687-7268</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_165</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="j">2021</subfield></datafield></record></collection>
|
score |
7.3994284 |
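The record's abstract describes extracting face features at several scales with VGG-16 and combining them into one multiscale feature descriptor. The sketch below is purely illustrative of that idea, not the paper's implementation: no real VGG-16 weights are used, and the pooling stages and function names (`avg_pool2x2`, `multiscale_features`) are hypothetical stand-ins for features taken from successive network stages.

```python
def avg_pool2x2(grid):
    """Downsample a 2D list of floats by averaging non-overlapping 2x2 blocks,
    loosely mimicking one pooling stage of a VGG-style network."""
    h, w = len(grid), len(grid[0])
    return [
        [(grid[r][c] + grid[r][c + 1] + grid[r + 1][c] + grid[r + 1][c + 1]) / 4.0
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]


def multiscale_features(image, n_scales=3):
    """Flatten the image at each successively pooled scale and concatenate the
    results, analogous to stacking features from different network stages
    into one multiscale descriptor."""
    features, grid = [], image
    for _ in range(n_scales):
        grid = avg_pool2x2(grid)
        features.extend(v for row in grid for v in row)
    return features


# Toy 8x8 "face image": each scale halves the side length (4x4, 2x2, 1x1),
# so the descriptor has 16 + 4 + 1 = 21 entries.
image = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
descriptor = multiscale_features(image)
print(len(descriptor))  # 21
```

The coarsest scale here degenerates to the global mean of the image; in the actual method, each scale would instead be a learned VGG-16 feature map, and the stacked descriptors span the dimensions of the multiscale manifold.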