Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets
Abstract: Facial expression recognition (FER) is one of the most active areas of research in computer science, owing to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the others in classification accuracy. However, one major weakness of previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves the need for a fair comparison with existing systems, it does not reflect the fact that these systems are built in the hope of eventually being used in the real world: these datasets assume a predefined camera setup, consist mostly of posed expressions collected in a controlled setting with fixed backgrounds and static ambient conditions, and show little variation in face size and camera angle, which is not the case in a dynamic real-world environment. The contributions of this work are two-fold. First, using numerous online resources as well as our own setup, we have collected a rich FER dataset with the above problems in mind. Second, we have chosen eleven state-of-the-art FER systems, implemented them, and rigorously evaluated them on our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real world. We hope that our dataset will become a benchmark for assessing the real-life performance of future FER systems.
Detailed description

Author: Siddiqi, Muhammad Hameed [author]
Format: Article
Language: English
Published: 2017
Keywords: Facial expressions; Classification; YouTube; Real-life scenarios
Note: © Springer Science+Business Media New York 2017
Contained in: Multimedia tools and applications - Springer US, 1995, 77(2017), no. 1, 06 Jan., pages 917-937
Contained in: volume:77 ; year:2017 ; number:1 ; day:06 ; month:01 ; pages:917-937
Link: https://doi.org/10.1007/s11042-016-4321-2 (license required)
DOI / URN: 10.1007/s11042-016-4321-2
Catalog ID: OLC2035041724
LEADER 01000caa a22002652 4500
001 OLC2035041724
003 DE-627
005 20230503193231.0
007 tu
008 200819s2017 xx ||||| 00| ||eng c
024 7_ |a 10.1007/s11042-016-4321-2 |2 doi
035 __ |a (DE-627)OLC2035041724
035 __ |a (DE-He213)s11042-016-4321-2-p
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 070 |a 004 |q VZ
100 1_ |a Siddiqi, Muhammad Hameed |e verfasserin |4 aut
245 10 |a Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets
264 _1 |c 2017
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a © Springer Science+Business Media New York 2017
520 __ |a Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems.
650 _4 |a Facial expressions
650 _4 |a Classification
650 _4 |a YouTube
650 _4 |a Real-life scenarios
700 1_ |a Ali, Maqbool |4 aut
700 1_ |a Abdelrahman Eldib, Mohamed Elsayed |4 aut
700 1_ |a Khan, Asfandyar |4 aut
700 1_ |a Banos, Oresti |4 aut
700 1_ |a Khan, Adil Mehmood |4 aut
700 1_ |a Lee, Sungyoung |4 aut
700 1_ |a Choo, Hyunseung |4 aut
773 08 |i Enthalten in |t Multimedia tools and applications |d Springer US, 1995 |g 77(2017), 1 vom: 06. Jan., Seite 917-937 |w (DE-627)189064145 |w (DE-600)1287642-2 |w (DE-576)052842126 |x 1380-7501 |7 nnns
773 18 |g volume:77 |g year:2017 |g number:1 |g day:06 |g month:01 |g pages:917-937
856 41 |u https://doi.org/10.1007/s11042-016-4321-2 |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_OLC
912 __ |a SSG-OLC-MAT
912 __ |a SSG-OLC-BUB
912 __ |a SSG-OLC-MKW
912 __ |a GBV_ILN_70
951 __ |a AR
952 __ |d 77 |j 2017 |e 1 |b 06 |c 01 |h 917-937
author_variant |
m h s mh mhs m a ma e m e a eme emea a k ak o b ob a m k am amk s l sl h c hc |
---|---|
matchkey_str |
article:13807501:2017----::vlaigelieefracotettoterifcaepesorcgiins |
hierarchy_sort_str |
2017 |
publishDate |
2017 |
allfields |
10.1007/s11042-016-4321-2 doi (DE-627)OLC2035041724 (DE-He213)s11042-016-4321-2-p DE-627 ger DE-627 rakwb eng 070 004 VZ Siddiqi, Muhammad Hameed verfasserin aut Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems. 
Facial expressions Classification YouTube Real-life scenarios Ali, Maqbool aut Abdelrahman Eldib, Mohamed Elsayed aut Khan, Asfandyar aut Banos, Oresti aut Khan, Adil Mehmood aut Lee, Sungyoung aut Choo, Hyunseung aut Enthalten in Multimedia tools and applications Springer US, 1995 77(2017), 1 vom: 06. Jan., Seite 917-937 (DE-627)189064145 (DE-600)1287642-2 (DE-576)052842126 1380-7501 nnns volume:77 year:2017 number:1 day:06 month:01 pages:917-937 https://doi.org/10.1007/s11042-016-4321-2 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OLC-BUB SSG-OLC-MKW GBV_ILN_70 AR 77 2017 1 06 01 917-937 |
spelling |
10.1007/s11042-016-4321-2 doi (DE-627)OLC2035041724 (DE-He213)s11042-016-4321-2-p DE-627 ger DE-627 rakwb eng 070 004 VZ Siddiqi, Muhammad Hameed verfasserin aut Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems. 
Facial expressions Classification YouTube Real-life scenarios Ali, Maqbool aut Abdelrahman Eldib, Mohamed Elsayed aut Khan, Asfandyar aut Banos, Oresti aut Khan, Adil Mehmood aut Lee, Sungyoung aut Choo, Hyunseung aut Enthalten in Multimedia tools and applications Springer US, 1995 77(2017), 1 vom: 06. Jan., Seite 917-937 (DE-627)189064145 (DE-600)1287642-2 (DE-576)052842126 1380-7501 nnns volume:77 year:2017 number:1 day:06 month:01 pages:917-937 https://doi.org/10.1007/s11042-016-4321-2 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OLC-BUB SSG-OLC-MKW GBV_ILN_70 AR 77 2017 1 06 01 917-937 |
allfields_unstemmed |
10.1007/s11042-016-4321-2 doi (DE-627)OLC2035041724 (DE-He213)s11042-016-4321-2-p DE-627 ger DE-627 rakwb eng 070 004 VZ Siddiqi, Muhammad Hameed verfasserin aut Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems. 
Facial expressions Classification YouTube Real-life scenarios Ali, Maqbool aut Abdelrahman Eldib, Mohamed Elsayed aut Khan, Asfandyar aut Banos, Oresti aut Khan, Adil Mehmood aut Lee, Sungyoung aut Choo, Hyunseung aut Enthalten in Multimedia tools and applications Springer US, 1995 77(2017), 1 vom: 06. Jan., Seite 917-937 (DE-627)189064145 (DE-600)1287642-2 (DE-576)052842126 1380-7501 nnns volume:77 year:2017 number:1 day:06 month:01 pages:917-937 https://doi.org/10.1007/s11042-016-4321-2 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OLC-BUB SSG-OLC-MKW GBV_ILN_70 AR 77 2017 1 06 01 917-937 |
allfieldsGer |
10.1007/s11042-016-4321-2 doi (DE-627)OLC2035041724 (DE-He213)s11042-016-4321-2-p DE-627 ger DE-627 rakwb eng 070 004 VZ Siddiqi, Muhammad Hameed verfasserin aut Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems. 
Facial expressions Classification YouTube Real-life scenarios Ali, Maqbool aut Abdelrahman Eldib, Mohamed Elsayed aut Khan, Asfandyar aut Banos, Oresti aut Khan, Adil Mehmood aut Lee, Sungyoung aut Choo, Hyunseung aut Enthalten in Multimedia tools and applications Springer US, 1995 77(2017), 1 vom: 06. Jan., Seite 917-937 (DE-627)189064145 (DE-600)1287642-2 (DE-576)052842126 1380-7501 nnns volume:77 year:2017 number:1 day:06 month:01 pages:917-937 https://doi.org/10.1007/s11042-016-4321-2 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OLC-BUB SSG-OLC-MKW GBV_ILN_70 AR 77 2017 1 06 01 917-937 |
allfieldsSound |
10.1007/s11042-016-4321-2 doi (DE-627)OLC2035041724 (DE-He213)s11042-016-4321-2-p DE-627 ger DE-627 rakwb eng 070 004 VZ Siddiqi, Muhammad Hameed verfasserin aut Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems. 
Facial expressions Classification YouTube Real-life scenarios Ali, Maqbool aut Abdelrahman Eldib, Mohamed Elsayed aut Khan, Asfandyar aut Banos, Oresti aut Khan, Adil Mehmood aut Lee, Sungyoung aut Choo, Hyunseung aut Enthalten in Multimedia tools and applications Springer US, 1995 77(2017), 1 vom: 06. Jan., Seite 917-937 (DE-627)189064145 (DE-600)1287642-2 (DE-576)052842126 1380-7501 nnns volume:77 year:2017 number:1 day:06 month:01 pages:917-937 https://doi.org/10.1007/s11042-016-4321-2 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OLC-BUB SSG-OLC-MKW GBV_ILN_70 AR 77 2017 1 06 01 917-937 |
language |
English |
source |
Enthalten in Multimedia tools and applications 77(2017), 1 vom: 06. Jan., Seite 917-937 volume:77 year:2017 number:1 day:06 month:01 pages:917-937 |
sourceStr |
Enthalten in Multimedia tools and applications 77(2017), 1 vom: 06. Jan., Seite 917-937 volume:77 year:2017 number:1 day:06 month:01 pages:917-937 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Facial expressions Classification YouTube Real-life scenarios |
dewey-raw |
070 |
isfreeaccess_bool |
false |
container_title |
Multimedia tools and applications |
authorswithroles_txt_mv |
Siddiqi, Muhammad Hameed @@aut@@ Ali, Maqbool @@aut@@ Abdelrahman Eldib, Mohamed Elsayed @@aut@@ Khan, Asfandyar @@aut@@ Banos, Oresti @@aut@@ Khan, Adil Mehmood @@aut@@ Lee, Sungyoung @@aut@@ Choo, Hyunseung @@aut@@ |
publishDateDaySort_date |
2017-01-06T00:00:00Z |
hierarchy_top_id |
189064145 |
dewey-sort |
270 |
id |
OLC2035041724 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2035041724</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503193231.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2017 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11042-016-4321-2</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2035041724</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11042-016-4321-2-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">070</subfield><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Siddiqi, Muhammad Hameed</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield 
tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media New York 2017</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. 
We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Facial expressions</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Classification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">YouTube</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Real-life scenarios</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Ali, Maqbool</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Abdelrahman Eldib, Mohamed Elsayed</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Khan, Asfandyar</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Banos, Oresti</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Khan, Adil Mehmood</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lee, Sungyoung</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Choo, Hyunseung</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Multimedia tools and applications</subfield><subfield code="d">Springer US, 1995</subfield><subfield code="g">77(2017), 1 vom: 06. 
Jan., Seite 917-937</subfield><subfield code="w">(DE-627)189064145</subfield><subfield code="w">(DE-600)1287642-2</subfield><subfield code="w">(DE-576)052842126</subfield><subfield code="x">1380-7501</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:77</subfield><subfield code="g">year:2017</subfield><subfield code="g">number:1</subfield><subfield code="g">day:06</subfield><subfield code="g">month:01</subfield><subfield code="g">pages:917-937</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11042-016-4321-2</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-BUB</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MKW</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">77</subfield><subfield code="j">2017</subfield><subfield code="e">1</subfield><subfield code="b">06</subfield><subfield code="c">01</subfield><subfield code="h">917-937</subfield></datafield></record></collection>
|
author |
Siddiqi, Muhammad Hameed |
spellingShingle |
Siddiqi, Muhammad Hameed ddc 070 misc Facial expressions misc Classification misc YouTube misc Real-life scenarios Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets |
authorStr |
Siddiqi, Muhammad Hameed |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)189064145 |
format |
Article |
dewey-ones |
070 - News media, journalism & publishing 004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
1380-7501 |
topic_title |
070 004 VZ Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets Facial expressions Classification YouTube Real-life scenarios |
topic |
ddc 070 misc Facial expressions misc Classification misc YouTube misc Real-life scenarios |
topic_unstemmed |
ddc 070 misc Facial expressions misc Classification misc YouTube misc Real-life scenarios |
topic_browse |
ddc 070 misc Facial expressions misc Classification misc YouTube misc Real-life scenarios |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Multimedia tools and applications |
hierarchy_parent_id |
189064145 |
dewey-tens |
070 - News media, journalism & publishing 000 - Computer science, knowledge & systems |
hierarchy_top_title |
Multimedia tools and applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)189064145 (DE-600)1287642-2 (DE-576)052842126 |
title |
Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets |
ctrlnum |
(DE-627)OLC2035041724 (DE-He213)s11042-016-4321-2-p |
title_full |
Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets |
author_sort |
Siddiqi, Muhammad Hameed |
journal |
Multimedia tools and applications |
journalStr |
Multimedia tools and applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2017 |
contenttype_str_mv |
txt |
container_start_page |
917 |
author_browse |
Siddiqi, Muhammad Hameed Ali, Maqbool Abdelrahman Eldib, Mohamed Elsayed Khan, Asfandyar Banos, Oresti Khan, Adil Mehmood Lee, Sungyoung Choo, Hyunseung |
container_volume |
77 |
class |
070 004 VZ |
format_se |
Aufsätze |
author-letter |
Siddiqi, Muhammad Hameed |
doi_str_mv |
10.1007/s11042-016-4321-2 |
dewey-full |
070 004 |
title_sort |
evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel youtube-based datasets |
title_auth |
Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets |
abstract |
Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems. © Springer Science+Business Media New York 2017 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OLC-BUB SSG-OLC-MKW GBV_ILN_70 |
container_issue |
1 |
title_short |
Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets |
url |
https://doi.org/10.1007/s11042-016-4321-2 |
remote_bool |
false |
author2 |
Ali, Maqbool Abdelrahman Eldib, Mohamed Elsayed Khan, Asfandyar Banos, Oresti Khan, Adil Mehmood Lee, Sungyoung Choo, Hyunseung |
author2Str |
Ali, Maqbool Abdelrahman Eldib, Mohamed Elsayed Khan, Asfandyar Banos, Oresti Khan, Adil Mehmood Lee, Sungyoung Choo, Hyunseung |
ppnlink |
189064145 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11042-016-4321-2 |
up_date |
2024-07-03T23:33:52.473Z |
_version_ |
1803602763421057024 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2035041724</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503193231.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2017 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11042-016-4321-2</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2035041724</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11042-016-4321-2-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">070</subfield><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Siddiqi, Muhammad Hameed</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based datasets</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield 
tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media New York 2017</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. 
We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Facial expressions</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Classification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">YouTube</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Real-life scenarios</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Ali, Maqbool</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Abdelrahman Eldib, Mohamed Elsayed</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Khan, Asfandyar</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Banos, Oresti</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Khan, Adil Mehmood</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lee, Sungyoung</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Choo, Hyunseung</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Multimedia tools and applications</subfield><subfield code="d">Springer US, 1995</subfield><subfield code="g">77(2017), 1 vom: 06. 
Jan., Seite 917-937</subfield><subfield code="w">(DE-627)189064145</subfield><subfield code="w">(DE-600)1287642-2</subfield><subfield code="w">(DE-576)052842126</subfield><subfield code="x">1380-7501</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:77</subfield><subfield code="g">year:2017</subfield><subfield code="g">number:1</subfield><subfield code="g">day:06</subfield><subfield code="g">month:01</subfield><subfield code="g">pages:917-937</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11042-016-4321-2</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-BUB</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MKW</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">77</subfield><subfield code="j">2017</subfield><subfield code="e">1</subfield><subfield code="b">06</subfield><subfield code="c">01</subfield><subfield code="h">917-937</subfield></datafield></record></collection>
|
score |
7.399581 |