Dictionary-Based Face and Person Recognition From Unconstrained Video
To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images.
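The dictionary-based recognition idea summarized above can be sketched in a few lines. This is an illustrative toy, not the authors' video-dictionary implementation: each identity gets a dictionary of feature atoms, and a probe is assigned to the identity whose dictionary reconstructs it with the smallest residual (here with plain least squares in place of sparse coding and kernels).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dictionary(n_features=16, n_atoms=8):
    """A toy per-identity dictionary with unit-norm columns (atoms)."""
    D = rng.normal(size=(n_features, n_atoms))
    return D / np.linalg.norm(D, axis=0)

def residual(D, y):
    """Reconstruction error of probe y using least-squares coefficients."""
    x, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ x)

# Hypothetical identities for illustration only.
dictionaries = {name: make_dictionary() for name in ("alice", "bob")}

# A probe lying in alice's column space is reconstructed almost exactly
# by alice's dictionary, so the minimum-residual rule picks alice.
probe = dictionaries["alice"] @ rng.normal(size=8)
best = min(dictionaries, key=lambda name: residual(dictionaries[name], probe))
print(best)  # → alice
```

The paper's method replaces the least-squares step with sparse coding over learned (and kernelized) video-dictionaries, and fuses separate face and body dictionaries.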
Detailed description
Author: Yi-Chen Chen; Vishal M. Patel; P. Jonathon Phillips; Rama Chellappa
Format: E-article
Language: English
Published: 2015
Parent work: In: IEEE Access - IEEE, 2014, 3(2015), pages 1783-1798
Parent work: volume:3 ; year:2015 ; pages:1783-1798
DOI / URN: 10.1109/ACCESS.2015.2485400
Catalog ID: DOAJ015296679
LEADER 01000caa a22002652 4500
001 DOAJ015296679
003 DE-627
005 20230503064838.0
007 cr uuu---uuuuu
008 230226s2015 xx |||||o 00| ||eng c
024 7  |a 10.1109/ACCESS.2015.2485400 |2 doi
035    |a (DE-627)DOAJ015296679
035    |a (DE-599)DOAJ5575555824144f6795ffaa1f6ff65e92
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a TK1-9971
100 0  |a Yi-Chen Chen |e verfasserin |4 aut
245 10 |a Dictionary-Based Face and Person Recognition From Unconstrained Video
264  1 |c 2015
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods for learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge, Face and Ocular Challenge Series, Honda/UCSD, and UMD data sets that consist of unconstrained video sequences. Our experimental results on these four data sets compare favorably with those published in the literature. We show that fusing face and body identity cues can improve performance over face alone.
650  4 |a Video-based face recognition
650  4 |a person recognition
650  4 |a dictionary learning
650  4 |a kernel dictionary learning
653  0 |a Electrical engineering. Electronics. Nuclear engineering
700 0  |a Vishal M. Patel |e verfasserin |4 aut
700 0  |a P. Jonathon Phillips |e verfasserin |4 aut
700 0  |a Rama Chellappa |e verfasserin |4 aut
773 08 |i In |t IEEE Access |d IEEE, 2014 |g 3(2015), Seite 1783-1798 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 18 |g volume:3 |g year:2015 |g pages:1783-1798
856 40 |u https://doi.org/10.1109/ACCESS.2015.2485400 |z kostenfrei
856 40 |u https://doaj.org/article/5575555824144f6795ffaa1f6ff65e92 |z kostenfrei
856 40 |u https://ieeexplore.ieee.org/document/7296579/ |z kostenfrei
856 42 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_DOAJ
912    |a SSG-OLC-PHA
912    |a GBV_ILN_11
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_95
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_151
912    |a GBV_ILN_161
912    |a GBV_ILN_170
912    |a GBV_ILN_213
912    |a GBV_ILN_230
912    |a GBV_ILN_285
912    |a GBV_ILN_293
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_2014
912    |a GBV_ILN_4012
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4249
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4335
912    |a GBV_ILN_4338
912    |a GBV_ILN_4367
912    |a GBV_ILN_4700
951    |a AR
952    |d 3 |j 2015 |h 1783-1798
author_variant |
y c c ycc v m p vmp p j p pjp r c rc |
matchkey_str |
article:21693536:2015----::itoayaefcadesneontofou |
hierarchy_sort_str |
2015 |
callnumber-subject-code |
TK |
publishDate |
2015 |
allfields |
10.1109/ACCESS.2015.2485400 doi (DE-627)DOAJ015296679 (DE-599)DOAJ5575555824144f6795ffaa1f6ff65e92 DE-627 ger DE-627 rakwb eng TK1-9971 Yi-Chen Chen verfasserin aut Dictionary-Based Face and Person Recognition From Unconstrained Video 2015 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods for learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge, Face and Ocular Challenge Series, Honda/UCSD, and UMD data sets that consist of unconstrained video sequences. Our experimental results on these four data sets compare favorably with those published in the literature. We show that fusing face and body identity cues can improve performance over face alone. Video-based face recognition person recognition dictionary learning kernel dictionary learning Electrical engineering. Electronics. Nuclear engineering Vishal M. Patel verfasserin aut P. 
Jonathon Phillips verfasserin aut Rama Chellappa verfasserin aut In IEEE Access IEEE, 2014 3(2015), Seite 1783-1798 (DE-627)728440385 (DE-600)2687964-5 21693536 nnns volume:3 year:2015 pages:1783-1798 https://doi.org/10.1109/ACCESS.2015.2485400 kostenfrei https://doaj.org/article/5575555824144f6795ffaa1f6ff65e92 kostenfrei https://ieeexplore.ieee.org/document/7296579/ kostenfrei https://doaj.org/toc/2169-3536 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 3 2015 1783-1798 |
language |
English |
source |
In IEEE Access 3(2015), Seite 1783-1798 volume:3 year:2015 pages:1783-1798 |
sourceStr |
In IEEE Access 3(2015), Seite 1783-1798 volume:3 year:2015 pages:1783-1798 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Video-based face recognition person recognition dictionary learning kernel dictionary learning Electrical engineering. Electronics. Nuclear engineering |
isfreeaccess_bool |
true |
container_title |
IEEE Access |
authorswithroles_txt_mv |
Yi-Chen Chen @@aut@@ Vishal M. Patel @@aut@@ P. Jonathon Phillips @@aut@@ Rama Chellappa @@aut@@ |
publishDateDaySort_date |
2015-01-01T00:00:00Z |
hierarchy_top_id |
728440385 |
id |
DOAJ015296679 |
language_de |
englisch |
callnumber-first |
T - Technology |
author |
Yi-Chen Chen |
spellingShingle |
Yi-Chen Chen misc TK1-9971 misc Video-based face recognition misc person recognition misc dictionary learning misc kernel dictionary learning misc Electrical engineering. Electronics. Nuclear engineering Dictionary-Based Face and Person Recognition From Unconstrained Video |
authorStr |
Yi-Chen Chen |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)728440385 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK1-9971 |
illustrated |
Not Illustrated |
issn |
21693536 |
topic_title |
TK1-9971 Dictionary-Based Face and Person Recognition From Unconstrained Video Video-based face recognition person recognition dictionary learning kernel dictionary learning |
topic |
misc TK1-9971 misc Video-based face recognition misc person recognition misc dictionary learning misc kernel dictionary learning misc Electrical engineering. Electronics. Nuclear engineering |
topic_unstemmed |
misc TK1-9971 misc Video-based face recognition misc person recognition misc dictionary learning misc kernel dictionary learning misc Electrical engineering. Electronics. Nuclear engineering |
topic_browse |
misc TK1-9971 misc Video-based face recognition misc person recognition misc dictionary learning misc kernel dictionary learning misc Electrical engineering. Electronics. Nuclear engineering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Access |
hierarchy_parent_id |
728440385 |
hierarchy_top_title |
IEEE Access |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)728440385 (DE-600)2687964-5 |
title |
Dictionary-Based Face and Person Recognition From Unconstrained Video |
ctrlnum |
(DE-627)DOAJ015296679 (DE-599)DOAJ5575555824144f6795ffaa1f6ff65e92 |
title_full |
Dictionary-Based Face and Person Recognition From Unconstrained Video |
author_sort |
Yi-Chen Chen |
journal |
IEEE Access |
journalStr |
IEEE Access |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2015 |
contenttype_str_mv |
txt |
container_start_page |
1783 |
author_browse |
Yi-Chen Chen Vishal M. Patel P. Jonathon Phillips Rama Chellappa |
container_volume |
3 |
class |
TK1-9971 |
format_se |
Elektronische Aufsätze |
author-letter |
Yi-Chen Chen |
doi_str_mv |
10.1109/ACCESS.2015.2485400 |
author2-role |
verfasserin |
title_sort |
dictionary-based face and person recognition from unconstrained video |
callnumber |
TK1-9971 |
title_auth |
Dictionary-Based Face and Person Recognition From Unconstrained Video |
abstract |
To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods for learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge, Face and Ocular Challenge Series, Honda/UCSD, and UMD data sets that consist of unconstrained video sequences. Our experimental results on these four data sets compare favorably with those published in the literature. We show that fusing face and body identity cues can improve performance over face alone. |
abstractGer |
To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods for learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge, Face and Ocular Challenge Series, Honda/UCSD, and UMD data sets that consist of unconstrained video sequences. Our experimental results on these four data sets compare favorably with those published in the literature. We show that fusing face and body identity cues can improve performance over face alone. |
abstract_unstemmed |
To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods for learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge, Face and Ocular Challenge Series, Honda/UCSD, and UMD data sets that consist of unconstrained video sequences. Our experimental results on these four data sets compare favorably with those published in the literature. We show that fusing face and body identity cues can improve performance over face alone. |
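The abstract above describes classification with learned dictionaries and sparse representation. As an illustrative aside only — not the authors' implementation, which additionally learns video-dictionaries over face and body cues with kernel methods — the following minimal NumPy sketch shows the core idea behind sparse-representation classification: sparse-code a query feature over each subject's dictionary with a greedy orthogonal-matching-pursuit coder (a common choice, assumed here), then assign the subject whose dictionary reconstructs the query with the smallest residual. All names (`omp`, `classify`) and the sparsity level are hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D
    using at most k atoms (columns of D)."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(D.shape[1])
        x[support] = coef
        residual = y - D @ x
    return x

def classify(dictionaries, y, k=2):
    """Assign query y to the subject whose dictionary gives the
    smallest sparse-reconstruction residual."""
    residuals = {subject: np.linalg.norm(y - D @ omp(D, y, k))
                 for subject, D in dictionaries.items()}
    return min(residuals, key=residuals.get)
```

In the paper's setting the query would be a feature extracted from video frames and each dictionary would be learned per subject from training video; the sketch only makes the residual-based decision rule concrete.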
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Dictionary-Based Face and Person Recognition From Unconstrained Video |
url |
https://doi.org/10.1109/ACCESS.2015.2485400 https://doaj.org/article/5575555824144f6795ffaa1f6ff65e92 https://ieeexplore.ieee.org/document/7296579/ https://doaj.org/toc/2169-3536 |
remote_bool |
true |
author2 |
Vishal M. Patel P. Jonathon Phillips Rama Chellappa |
author2Str |
Vishal M. Patel P. Jonathon Phillips Rama Chellappa |
ppnlink |
728440385 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/ACCESS.2015.2485400 |
callnumber-a |
TK1-9971 |
up_date |
2024-07-03T14:08:43.891Z |
_version_ |
1803567207695777792 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ015296679</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503064838.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230226s2015 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/ACCESS.2015.2485400</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ015296679</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ5575555824144f6795ffaa1f6ff65e92</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK1-9971</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Yi-Chen Chen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Dictionary-Based Face and Person Recognition From Unconstrained Video</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2015</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">To recognize people in unconstrained video, one has to explore the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods for learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge, Face and Ocular Challenge Series, Honda/UCSD, and UMD data sets that consist of unconstrained video sequences. Our experimental results on these four data sets compare favorably with those published in the literature. We show that fusing face and body identity cues can improve performance over face alone.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Video-based face recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">person recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">dictionary learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">kernel dictionary learning</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electrical engineering. Electronics. Nuclear engineering</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Vishal M. 
Patel</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">P. Jonathon Phillips</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Rama Chellappa</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Access</subfield><subfield code="d">IEEE, 2014</subfield><subfield code="g">3(2015), Seite 1783-1798</subfield><subfield code="w">(DE-627)728440385</subfield><subfield code="w">(DE-600)2687964-5</subfield><subfield code="x">21693536</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:3</subfield><subfield code="g">year:2015</subfield><subfield code="g">pages:1783-1798</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/ACCESS.2015.2485400</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/5575555824144f6795ffaa1f6ff65e92</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/7296579/</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2169-3536</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">3</subfield><subfield code="j">2015</subfield><subfield code="h">1783-1798</subfield></datafield></record></collection>
|