Efficient-SwishNet Based System for Facial Emotion Recognition
Facial emotion recognition (FER) is an important research area in artificial intelligence (AI) with many applications, e.g., face authentication systems, e-learning, entertainment, and deepfake detection. FER remains a challenging task because of the large intra-class variation of emotions. Although prior deep learning methods have achieved good performance for FER, there is still a need for efficient and effective FER systems that are robust to conditions such as variations in illumination, face angle, gender, race, and background setting, and to people from diverse geographical regions. Moreover, a generalized model for classifying human emotions is needed so that computer systems can interact with humans according to their emotions and thereby improve the interaction. This work presents a novel lightweight Efficient-SwishNet model for emotion recognition that is robust to the aforementioned conditions. We introduce the low-cost Swish activation function, which is smooth, unbounded above, and bounded below, into our model. Unboundedness helps avoid saturation, while smoothness aids optimization and generalization. The performance of the proposed model is evaluated on five diverse datasets: CK+, JAFFE, FER-2013, KDEF, and FERG. We also perform a cross-corpora evaluation to demonstrate the generalizability of the model. The proposed model achieves a very high recognition rate on all datasets, demonstrating its merit for both human facial images and stylized cartoon characters. Moreover, we conduct an ablation study with different variants of the model to establish its efficiency and effectiveness for emotion identification.
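The Swish activation used by Efficient-SwishNet is f(x) = x · sigmoid(βx). A minimal sketch of its defining properties (smooth, unbounded above, bounded below) follows; the function name and the β = 1 default are assumptions for illustration, not the authors' implementation:

```python
import math

def swish(x, beta=1.0):
    """Swish activation: f(x) = x * sigmoid(beta * x).

    Smooth everywhere; unbounded above (f(x) -> x as x -> +inf, so it
    does not saturate for large positive inputs); bounded below (for
    beta = 1 the minimum is roughly -0.278, near x = -1.28).
    """
    return x / (1.0 + math.exp(-beta * x))
```

For large positive x the function behaves like the identity, while for large negative x it decays to zero from below, which is the bounded-below property the abstract credits with stabilizing training.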
Detailed description

Author(s): Tarim Dar, Ali Javed, Sami Bourouis, Hany S. Hussein, Hammam Alshazly
Format: E-article
Language: English
Published: 2022
Published in: IEEE Access - IEEE, 2014, 10(2022), pages 71311-71328 (volume:10 ; year:2022 ; pages:71311-71328)
DOI: 10.1109/ACCESS.2022.3188730
Catalog ID: DOAJ03653188X
LEADER 01000caa a22002652 4500
001 DOAJ03653188X
003 DE-627
005 20230307231828.0
007 cr uuu---uuuuu
008 230227s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/ACCESS.2022.3188730 |2 doi
035 |a (DE-627)DOAJ03653188X
035 |a (DE-599)DOAJf7f62713cd5e486a8c277e512fceadf8
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a TK1-9971
100 0 |a Tarim Dar |e verfasserin |4 aut
245 1 0 |a Efficient-SwishNet Based System for Facial Emotion Recognition
264 1 |c 2022
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a Facial emotion recognition (FER) is an important research area in artificial intelligence (AI) with many applications, e.g., face authentication systems, e-learning, entertainment, and deepfake detection. FER remains a challenging task because of the large intra-class variation of emotions. Although prior deep learning methods have achieved good performance for FER, there is still a need for efficient and effective FER systems that are robust to conditions such as variations in illumination, face angle, gender, race, and background setting, and to people from diverse geographical regions. Moreover, a generalized model for classifying human emotions is needed so that computer systems can interact with humans according to their emotions and thereby improve the interaction. This work presents a novel lightweight Efficient-SwishNet model for emotion recognition that is robust to the aforementioned conditions. We introduce the low-cost Swish activation function, which is smooth, unbounded above, and bounded below, into our model. Unboundedness helps avoid saturation, while smoothness aids optimization and generalization. The performance of the proposed model is evaluated on five diverse datasets: CK+, JAFFE, FER-2013, KDEF, and FERG. We also perform a cross-corpora evaluation to demonstrate the generalizability of the model. The proposed model achieves a very high recognition rate on all datasets, demonstrating its merit for both human facial images and stylized cartoon characters. Moreover, we conduct an ablation study with different variants of the model to establish its efficiency and effectiveness for emotion identification.
650 4 |a EfficientNet
650 4 |a efficient-SwishNet
650 4 |a facial emotion recognition
650 4 |a human computer interaction (HCI)
650 4 |a Swish activation
653 0 |a Electrical engineering. Electronics. Nuclear engineering
700 0 |a Ali Javed |e verfasserin |4 aut
700 0 |a Sami Bourouis |e verfasserin |4 aut
700 0 |a Hany S. Hussein |e verfasserin |4 aut
700 0 |a Hammam Alshazly |e verfasserin |4 aut
773 0 8 |i In |t IEEE Access |d IEEE, 2014 |g 10(2022), Seite 71311-71328 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 1 8 |g volume:10 |g year:2022 |g pages:71311-71328
856 4 0 |u https://doi.org/10.1109/ACCESS.2022.3188730 |z kostenfrei
856 4 0 |u https://doaj.org/article/f7f62713cd5e486a8c277e512fceadf8 |z kostenfrei
856 4 0 |u https://ieeexplore.ieee.org/document/9815266/ |z kostenfrei
856 4 2 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a GBV_ILN_11
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_213
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_2014
912 |a GBV_ILN_4012
912 |a GBV_ILN_4037
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4249
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4335
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 10 |j 2022 |h 71311-71328
author_variant: t d td a j aj s b sb h s h hsh h a ha
matchkey_str: article:21693536:2022----::fiinsihebsdytmofcae
hierarchy_sort_str: 2022
callnumber-subject-code: TK
publishDate: 2022
language: English
source: In IEEE Access 10(2022), Seite 71311-71328 volume:10 year:2022 pages:71311-71328
sourceStr: In IEEE Access 10(2022), Seite 71311-71328 volume:10 year:2022 pages:71311-71328
format_phy_str_mv: Article
institution: findex.gbv.de
topic_facet: EfficientNet efficient-SwishNet facial emotion recognition human computer interaction (HCI) Swish activation Electrical engineering. Electronics. Nuclear engineering
isfreeaccess_bool: true
container_title: IEEE Access
authorswithroles_txt_mv: Tarim Dar @@aut@@ Ali Javed @@aut@@ Sami Bourouis @@aut@@ Hany S. Hussein @@aut@@ Hammam Alshazly @@aut@@
publishDateDaySort_date: 2022-01-01T00:00:00Z
hierarchy_top_id: 728440385
id: DOAJ03653188X
language_de: englisch
|
callnumber-first |
T - Technology |
author |
Tarim Dar |
spellingShingle |
Tarim Dar misc TK1-9971 misc EfficientNet misc efficient-SwishNet misc facial emotion recognition misc human computer interaction (HCI) misc Swish activation misc Electrical engineering. Electronics. Nuclear engineering Efficient-SwishNet Based System for Facial Emotion Recognition |
authorStr |
Tarim Dar |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)728440385 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK1-9971 |
illustrated |
Not Illustrated |
issn |
21693536 |
topic_title |
TK1-9971 Efficient-SwishNet Based System for Facial Emotion Recognition EfficientNet efficient-SwishNet facial emotion recognition human computer interaction (HCI) Swish activation |
topic |
misc TK1-9971 misc EfficientNet misc efficient-SwishNet misc facial emotion recognition misc human computer interaction (HCI) misc Swish activation misc Electrical engineering. Electronics. Nuclear engineering |
topic_unstemmed |
misc TK1-9971 misc EfficientNet misc efficient-SwishNet misc facial emotion recognition misc human computer interaction (HCI) misc Swish activation misc Electrical engineering. Electronics. Nuclear engineering |
topic_browse |
misc TK1-9971 misc EfficientNet misc efficient-SwishNet misc facial emotion recognition misc human computer interaction (HCI) misc Swish activation misc Electrical engineering. Electronics. Nuclear engineering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Access |
hierarchy_parent_id |
728440385 |
hierarchy_top_title |
IEEE Access |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)728440385 (DE-600)2687964-5 |
title |
Efficient-SwishNet Based System for Facial Emotion Recognition |
ctrlnum |
(DE-627)DOAJ03653188X (DE-599)DOAJf7f62713cd5e486a8c277e512fceadf8 |
title_full |
Efficient-SwishNet Based System for Facial Emotion Recognition |
author_sort |
Tarim Dar |
journal |
IEEE Access |
journalStr |
IEEE Access |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
71311 |
author_browse |
Tarim Dar Ali Javed Sami Bourouis Hany S. Hussein Hammam Alshazly |
container_volume |
10 |
class |
TK1-9971 |
format_se |
Elektronische Aufsätze |
author-letter |
Tarim Dar |
doi_str_mv |
10.1109/ACCESS.2022.3188730 |
author2-role |
verfasserin |
title_sort |
efficient-swishnet based system for facial emotion recognition |
callnumber |
TK1-9971 |
title_auth |
Efficient-SwishNet Based System for Facial Emotion Recognition |
abstract |
Facial emotion recognition (FER) is an important research area in artificial intelligence (AI) with many applications, e.g., face authentication systems, e-learning, entertainment, and deepfake detection. FER remains a challenging task due to the large intra-class variations of emotions. Although prior deep learning methods have achieved good performance for FER, there is still a need to develop efficient and effective FER systems that are robust to conditions such as variations in illumination, face angles, gender, race, and background settings, and to people belonging to diverse geographical regions. Moreover, a generalized model for the classification of human emotions is required so that computer systems can interact with humans according to their emotions and improve that interaction. This work presents a novel lightweight Efficient-SwishNet model for emotion recognition that is robust to the aforementioned conditions. We introduce into our model a low-cost Swish activation function that is smooth, unbounded above, and bounded below. Unboundedness helps to avoid saturation, while smoothness aids the optimization and generalization of the model. The performance of the proposed model is evaluated on five diverse datasets: CK+, JAFFE, FER-2013, KDEF, and FERG. We also performed a cross-corpora evaluation to demonstrate the generalizability of our model. The proposed model achieves a very high recognition rate on all datasets, which proves the merit of the proposed framework for both human facial images and stylized cartoon characters. Moreover, we conducted an ablation study with different variants of our model to prove its efficiency and effectiveness for emotion identification.
abstractGer |
Facial emotion recognition (FER) is an important research area in artificial intelligence (AI) with many applications, e.g., face authentication systems, e-learning, entertainment, and deepfake detection. FER remains a challenging task due to the large intra-class variations of emotions. Although prior deep learning methods have achieved good performance for FER, there is still a need to develop efficient and effective FER systems that are robust to conditions such as variations in illumination, face angles, gender, race, and background settings, and to people belonging to diverse geographical regions. Moreover, a generalized model for the classification of human emotions is required so that computer systems can interact with humans according to their emotions and improve that interaction. This work presents a novel lightweight Efficient-SwishNet model for emotion recognition that is robust to the aforementioned conditions. We introduce into our model a low-cost Swish activation function that is smooth, unbounded above, and bounded below. Unboundedness helps to avoid saturation, while smoothness aids the optimization and generalization of the model. The performance of the proposed model is evaluated on five diverse datasets: CK+, JAFFE, FER-2013, KDEF, and FERG. We also performed a cross-corpora evaluation to demonstrate the generalizability of our model. The proposed model achieves a very high recognition rate on all datasets, which proves the merit of the proposed framework for both human facial images and stylized cartoon characters. Moreover, we conducted an ablation study with different variants of our model to prove its efficiency and effectiveness for emotion identification.
abstract_unstemmed |
Facial emotion recognition (FER) is an important research area in artificial intelligence (AI) with many applications, e.g., face authentication systems, e-learning, entertainment, and deepfake detection. FER remains a challenging task due to the large intra-class variations of emotions. Although prior deep learning methods have achieved good performance for FER, there is still a need to develop efficient and effective FER systems that are robust to conditions such as variations in illumination, face angles, gender, race, and background settings, and to people belonging to diverse geographical regions. Moreover, a generalized model for the classification of human emotions is required so that computer systems can interact with humans according to their emotions and improve that interaction. This work presents a novel lightweight Efficient-SwishNet model for emotion recognition that is robust to the aforementioned conditions. We introduce into our model a low-cost Swish activation function that is smooth, unbounded above, and bounded below. Unboundedness helps to avoid saturation, while smoothness aids the optimization and generalization of the model. The performance of the proposed model is evaluated on five diverse datasets: CK+, JAFFE, FER-2013, KDEF, and FERG. We also performed a cross-corpora evaluation to demonstrate the generalizability of our model. The proposed model achieves a very high recognition rate on all datasets, which proves the merit of the proposed framework for both human facial images and stylized cartoon characters. Moreover, we conducted an ablation study with different variants of our model to prove its efficiency and effectiveness for emotion identification.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Efficient-SwishNet Based System for Facial Emotion Recognition |
url |
https://doi.org/10.1109/ACCESS.2022.3188730 https://doaj.org/article/f7f62713cd5e486a8c277e512fceadf8 https://ieeexplore.ieee.org/document/9815266/ https://doaj.org/toc/2169-3536 |
remote_bool |
true |
author2 |
Ali Javed Sami Bourouis Hany S. Hussein Hammam Alshazly |
author2Str |
Ali Javed Sami Bourouis Hany S. Hussein Hammam Alshazly |
ppnlink |
728440385 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/ACCESS.2022.3188730 |
callnumber-a |
TK1-9971 |
up_date |
2024-07-03T21:06:55.430Z |
_version_ |
1803593518083473408 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ03653188X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230307231828.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230227s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/ACCESS.2022.3188730</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ03653188X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJf7f62713cd5e486a8c277e512fceadf8</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK1-9971</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Tarim Dar</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Efficient-SwishNet Based System for Facial Emotion Recognition</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Facial emotion recognition (FER) is an important research area in artificial intelligence (AI) with many applications, e.g., face authentication systems, e-learning, entertainment, and deepfake detection. FER remains a challenging task due to the large intra-class variations of emotions. Although prior deep learning methods have achieved good performance for FER, there is still a need to develop efficient and effective FER systems that are robust to conditions such as variations in illumination, face angles, gender, race, and background settings, and to people belonging to diverse geographical regions. Moreover, a generalized model for the classification of human emotions is required so that computer systems can interact with humans according to their emotions and improve that interaction. This work presents a novel lightweight Efficient-SwishNet model for emotion recognition that is robust to the aforementioned conditions. We introduce into our model a low-cost Swish activation function that is smooth, unbounded above, and bounded below. Unboundedness helps to avoid saturation, while smoothness aids the optimization and generalization of the model. The performance of the proposed model is evaluated on five diverse datasets: CK+, JAFFE, FER-2013, KDEF, and FERG. We also performed a cross-corpora evaluation to demonstrate the generalizability of our model. The proposed model achieves a very high recognition rate on all datasets, which proves the merit of the proposed framework for both human facial images and stylized cartoon characters. Moreover, we conducted an ablation study with different variants of our model to prove its efficiency and effectiveness for emotion identification.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">EfficientNet</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">efficient-SwishNet</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">facial emotion recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">human computer interaction (HCI)</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Swish activation</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electrical engineering. Electronics. Nuclear engineering</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Ali Javed</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Sami Bourouis</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Hany S. 
Hussein</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Hammam Alshazly</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Access</subfield><subfield code="d">IEEE, 2014</subfield><subfield code="g">10(2022), Seite 71311-71328</subfield><subfield code="w">(DE-627)728440385</subfield><subfield code="w">(DE-600)2687964-5</subfield><subfield code="x">21693536</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:10</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:71311-71328</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/ACCESS.2022.3188730</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/f7f62713cd5e486a8c277e512fceadf8</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/9815266/</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2169-3536</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">10</subfield><subfield code="j">2022</subfield><subfield code="h">71311-71328</subfield></datafield></record></collection>
|
score |
7.4007463 |
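The abstract credits the Swish activation with being smooth, unbounded above, and bounded below, properties said to avoid saturation and aid optimization. As a minimal sketch of that function only (not the paper's actual Efficient-SwishNet implementation, which is not reproduced in this record; some Swish variants also use a trainable beta parameter, assumed fixed here):

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    """Swish activation: x * sigmoid(beta * x).

    Smooth everywhere; unbounded above (approaches x for large
    positive x); bounded below (global minimum near -0.278 for
    beta = 1, approaching 0 from below as x -> -infinity).
    """
    # x * sigmoid(beta*x) rewritten as x / (1 + exp(-beta*x))
    return x / (1.0 + math.exp(-beta * x))
```

For example, `swish(0.0)` is exactly `0.0`, `swish(20.0)` is almost exactly `20.0`, and `swish(-50.0)` is a tiny negative number rather than a large one, illustrating the bounded-below behavior the abstract describes.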