Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition
Studies have investigated electroencephalogram (EEG)-based emotion recognition using hand-crafted EEG features (e.g., differential entropy) or the annotated emotion categories without any additional emotion factors (e.g., context). The effectiveness of raw EEG-based emotion recognition remains for further investigation.
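The shared/task-specific layer split described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed sizes (32 channels, 256 samples, temporal kernel length 25, 8 spatial filters), not the authors' implementation: a shared temporal-then-spatial filtering stage in the style of ShallowConvNet, feeding two task-specific classifier heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw EEG trial: 32 channels x 256 time samples
eeg = rng.standard_normal((32, 256))

def temporal_filter(x, kernel):
    # Convolve each channel along time (shared layer)
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in x])

def spatial_filter(x, weights):
    # Linear combination across channels (shared layer)
    return weights @ x

# Shared feature extractor: temporal then spatial filtering
t_kernel = rng.standard_normal(25)           # temporal kernel (assumed length)
s_weights = rng.standard_normal((8, 32))     # 8 spatial filters (assumed)
features = spatial_filter(temporal_filter(eeg, t_kernel), s_weights)
pooled = np.log(np.mean(features ** 2, axis=1))  # log-variance pooling

def task_head(feat, w):
    # Task-specific linear classifier head on the shared features
    return int(np.argmax(w @ feat))

# Two task-specific heads sharing the same features (multi-task learning)
w_emotion = rng.standard_normal((3, 8))  # 3 emotions: fear / sad / neutral
w_context = rng.standard_normal((2, 8))  # 2 contexts: social / nonsocial
emotion_pred = task_head(pooled, w_emotion)
context_pred = task_head(pooled, w_context)
```

In MTL training, both heads' losses would be backpropagated through the shared filtering stage, which is what lets the context task regularize the emotion task.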
Detailed description
Author(s): Choo, Sanghyun; Park, Hoonseok; Kim, Sangyeon; Park, Donghyun; Jung, Jae-Yoon; Lee, Sangwon; Nam, Chang S.
Format: E-Article
Language: English
Published: 2023
Subject headings: Emotion recognition; Multi-task learning (MTL); Convolutional neural network (CNN); Electroencephalogram (EEG)
Contained in: Expert systems with applications - Amsterdam [u.a.] : Elsevier Science, 1990, volume 227
DOI: 10.1016/j.eswa.2023.120348
Catalog ID: ELV01017026X
LEADER | 01000caa a22002652 4500 | ||
001 | ELV01017026X | ||
003 | DE-627 | ||
005 | 20230927092443.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230607s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.eswa.2023.120348 |2 doi | |
035 | |a (DE-627)ELV01017026X | ||
035 | |a (ELSEVIER)S0957-4174(23)00850-3 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 004 |q VZ |
084 | |a 54.72 |2 bkl | ||
100 | 1 | |a Choo, Sanghyun |e verfasserin |4 aut | |
245 | 1 | 0 | |a Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition |
264 | 1 | |c 2023 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 |  |a Studies have investigated electroencephalogram (EEG)-based emotion recognition using hand-crafted EEG features (e.g., differential entropy) or annotated emotion categories without any additional emotion factors (e.g., context); the effectiveness of raw EEG-based emotion recognition remains to be further investigated. In this study, we investigated the effectiveness of multi-task learning (MTL) for raw EEG-based convolutional neural networks (CNNs) in emotion recognition with auxiliary context information. Thirty subjects participated in this study; their brain signals were recorded while they watched six types of emotion images (social/nonsocial-fear, social/nonsocial-sad, and social/nonsocial-neutral). For the MTL architecture, we used the temporal and spatial filtering layers of raw EEG-based CNNs as shared layers, with task-specific layers for the emotion and context classification tasks. Subject-dependent classification with five repetitions of five-fold cross-validation was performed to test the classification accuracy of all comparison models. Our results showed that (1) the MTL classifier achieved significantly higher classification accuracy and improved on the single-task learning (STL) models for both emotion and context, and (2) ShallowConvNet was the best network architecture among the considered CNNs for MTL, with a statistically significant improvement over the raw EEG-based STL models. These results show that MTL is a promising method for emotion recognition with raw EEG-based CNN classifiers and emphasize the importance of considering context information. | ||
650 | 4 | |a Emotion recognition | |
650 | 4 | |a Multi-task learning (MTL) | |
650 | 4 | |a Convolutional neural network (CNN) | |
650 | 4 | |a Electroencephalogram (EEG) | |
700 | 1 | |a Park, Hoonseok |e verfasserin |0 (orcid)0000-0003-0776-6570 |4 aut | |
700 | 1 | |a Kim, Sangyeon |e verfasserin |0 (orcid)0000-0001-9316-7982 |4 aut | |
700 | 1 | |a Park, Donghyun |e verfasserin |0 (orcid)0000-0002-0887-6168 |4 aut | |
700 | 1 | |a Jung, Jae-Yoon |e verfasserin |0 (orcid)0000-0002-4850-6284 |4 aut | |
700 | 1 | |a Lee, Sangwon |e verfasserin |4 aut | |
700 | 1 | |a Nam, Chang S. |e verfasserin |0 (orcid)0000-0001-9005-0703 |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Expert systems with applications |d Amsterdam [u.a.] : Elsevier Science, 1990 |g 227 |h Online-Ressource |w (DE-627)320577961 |w (DE-600)2017237-0 |w (DE-576)11481807X |7 nnns |
773 | 1 | 8 | |g volume:227 |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
936 | b | k | |a 54.72 |j Künstliche Intelligenz |q VZ |
951 | |a AR | ||
952 | |d 227 |
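Fields in a textual MARC dump like the one above can be mined programmatically. A minimal sketch, assuming the pipe-separated layout shown here (not binary MARC; for real records a library such as pymarc would be the usual choice):

```python
# Parse the |a subfield from pipe-separated MARC field lines,
# using the 024 (DOI) and 245 (title) fields from the record above.
record_lines = [
    "024 | 7 | |a 10.1016/j.eswa.2023.120348 |2 doi",
    "245 | 1 | 0 | |a Effectiveness of multi-task deep learning framework "
    "for EEG-based emotion and context recognition",
]

def subfield_a(line):
    # Take everything after the "|a " marker, then drop any
    # trailing subfields (e.g. "|2 doi").
    after = line.split("|a ", 1)[1]
    return after.split(" |", 1)[0].strip()

doi = subfield_a(record_lines[0])
title = subfield_a(record_lines[1])
```

The same pattern applies to the 700 author fields or the 650 subject fields; only the field tag at the start of the line changes.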
code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">227</subfield></datafield></record></collection>
|
author |
Choo, Sanghyun |
spellingShingle |
Choo, Sanghyun ddc 004 bkl 54.72 misc Emotion recognition misc Multi-task learning (MTL) misc Convolutional neural network (CNN) misc Electroencephalogram (EEG) Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition |
authorStr |
Choo, Sanghyun |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)320577961 |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
004 VZ 54.72 bkl Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition Emotion recognition Multi-task learning (MTL) Convolutional neural network (CNN) Electroencephalogram (EEG) |
topic |
ddc 004 bkl 54.72 misc Emotion recognition misc Multi-task learning (MTL) misc Convolutional neural network (CNN) misc Electroencephalogram (EEG) |
topic_unstemmed |
ddc 004 bkl 54.72 misc Emotion recognition misc Multi-task learning (MTL) misc Convolutional neural network (CNN) misc Electroencephalogram (EEG) |
topic_browse |
ddc 004 bkl 54.72 misc Emotion recognition misc Multi-task learning (MTL) misc Convolutional neural network (CNN) misc Electroencephalogram (EEG) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Expert systems with applications |
hierarchy_parent_id |
320577961 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Expert systems with applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)320577961 (DE-600)2017237-0 (DE-576)11481807X |
title |
Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition |
ctrlnum |
(DE-627)ELV01017026X (ELSEVIER)S0957-4174(23)00850-3 |
title_full |
Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition |
author_sort |
Choo, Sanghyun |
journal |
Expert systems with applications |
journalStr |
Expert systems with applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
author_browse |
Choo, Sanghyun Park, Hoonseok Kim, Sangyeon Park, Donghyun Jung, Jae-Yoon Lee, Sangwon Nam, Chang S. |
container_volume |
227 |
class |
004 VZ 54.72 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Choo, Sanghyun |
doi_str_mv |
10.1016/j.eswa.2023.120348 |
normlink |
(ORCID)0000-0003-0776-6570 (ORCID)0000-0001-9316-7982 (ORCID)0000-0002-0887-6168 (ORCID)0000-0002-4850-6284 (ORCID)0000-0001-9005-0703 |
normlink_prefix_str_mv |
(orcid)0000-0003-0776-6570 (orcid)0000-0001-9316-7982 (orcid)0000-0002-0887-6168 (orcid)0000-0002-4850-6284 (orcid)0000-0001-9005-0703 |
dewey-full |
004 |
author2-role |
verfasserin |
title_sort |
effectiveness of multi-task deep learning framework for eeg-based emotion and context recognition |
title_auth |
Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition |
abstract |
Studies have investigated electroencephalogram (EEG)-based emotion recognition using hand-crafted EEG features (e.g., differential entropy) or annotated emotion categories without any additional emotion factors (e.g., context). The effectiveness of raw EEG-based emotion recognition remains to be further investigated. In this study, we investigated the effectiveness of multi-task learning (MTL) for raw EEG-based convolutional neural networks (CNNs) in emotion recognition with auxiliary context information. Thirty subjects participated in this study; their brain signals were recorded while they watched six types of emotion images (social/nonsocial-fear, social/nonsocial-sad, and social/nonsocial-neutral). For the MTL architecture, we utilized temporal and spatial filtering layers from raw EEG-based CNNs as shared and task-specific layers for the emotion and context classification tasks. Subject-dependent classification with five repetitions of five-fold cross-validation was performed to test the classification accuracy of all comparison models. Our results showed that (1) the MTL classifier achieved significantly higher classification accuracy and improved on the performance of the single-task learning (STL) models for both emotion and context, and (2) ShallowConvNet was the best network architecture among the considered CNNs for MTL, with statistically significant improvement over the raw EEG-based STL models. This shows that MTL can be a promising method for emotion recognition utilizing raw EEG-based CNN classifiers and emphasizes the importance of considering context information.
abstractGer |
Studies have investigated electroencephalogram (EEG)-based emotion recognition using hand-crafted EEG features (e.g., differential entropy) or annotated emotion categories without any additional emotion factors (e.g., context). The effectiveness of raw EEG-based emotion recognition remains to be further investigated. In this study, we investigated the effectiveness of multi-task learning (MTL) for raw EEG-based convolutional neural networks (CNNs) in emotion recognition with auxiliary context information. Thirty subjects participated in this study; their brain signals were recorded while they watched six types of emotion images (social/nonsocial-fear, social/nonsocial-sad, and social/nonsocial-neutral). For the MTL architecture, we utilized temporal and spatial filtering layers from raw EEG-based CNNs as shared and task-specific layers for the emotion and context classification tasks. Subject-dependent classification with five repetitions of five-fold cross-validation was performed to test the classification accuracy of all comparison models. Our results showed that (1) the MTL classifier achieved significantly higher classification accuracy and improved on the performance of the single-task learning (STL) models for both emotion and context, and (2) ShallowConvNet was the best network architecture among the considered CNNs for MTL, with statistically significant improvement over the raw EEG-based STL models. This shows that MTL can be a promising method for emotion recognition utilizing raw EEG-based CNN classifiers and emphasizes the importance of considering context information.
abstract_unstemmed |
Studies have investigated electroencephalogram (EEG)-based emotion recognition using hand-crafted EEG features (e.g., differential entropy) or annotated emotion categories without any additional emotion factors (e.g., context). The effectiveness of raw EEG-based emotion recognition remains to be further investigated. In this study, we investigated the effectiveness of multi-task learning (MTL) for raw EEG-based convolutional neural networks (CNNs) in emotion recognition with auxiliary context information. Thirty subjects participated in this study; their brain signals were recorded while they watched six types of emotion images (social/nonsocial-fear, social/nonsocial-sad, and social/nonsocial-neutral). For the MTL architecture, we utilized temporal and spatial filtering layers from raw EEG-based CNNs as shared and task-specific layers for the emotion and context classification tasks. Subject-dependent classification with five repetitions of five-fold cross-validation was performed to test the classification accuracy of all comparison models. Our results showed that (1) the MTL classifier achieved significantly higher classification accuracy and improved on the performance of the single-task learning (STL) models for both emotion and context, and (2) ShallowConvNet was the best network architecture among the considered CNNs for MTL, with statistically significant improvement over the raw EEG-based STL models. This shows that MTL can be a promising method for emotion recognition utilizing raw EEG-based CNN classifiers and emphasizes the importance of considering context information.
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition |
remote_bool |
true |
author2 |
Park, Hoonseok Kim, Sangyeon Park, Donghyun Jung, Jae-Yoon Lee, Sangwon Nam, Chang S. |
author2Str |
Park, Hoonseok Kim, Sangyeon Park, Donghyun Jung, Jae-Yoon Lee, Sangwon Nam, Chang S. |
ppnlink |
320577961 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.eswa.2023.120348 |
up_date |
2024-07-06T17:03:57.652Z |
_version_ |
1803850023076626432 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV01017026X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230927092443.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230607s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.eswa.2023.120348</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV01017026X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0957-4174(23)00850-3</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.72</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Choo, Sanghyun</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Effectiveness of multi-task deep learning framework for EEG-based emotion and context recognition</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield 
code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Studies have investigated electroencephalogram (EEG)-based emotion recognition using hand-crafted EEG features (e.g., differential entropy) or the annotated emotion categories without any additional emotion factors (e.g., context). The effectiveness of raw EEG-based emotion recognition remains for further investigation. In this study, we investigated the effectiveness of multi-task learning (MTL) for raw EEG-based convolutional neural networks (CNNs) in emotion recognition with auxiliary context information. Thirty subjects participated in this study, where their brain signals were collected when watching six types of emotion images (social/nonsocial-fear, social/nonsocial-sad, and social/nonsocial-neutral). For the MTL architecture, we utilized temporal and spatial filtering layers from raw EEG-based CNNs as shared and task-specific layers for emotion and context classification tasks. Subject-dependent classifications and five repeated five-fold cross-validation were performed to test the classification accuracy for all comparison models. Our results showed that (1) the MTL classifier had a significantly higher classification accuracy and improved the performance of the single-task learnings (STLs) for both emotion and context, and (2) the ShallowConvNet was the best network architecture among the considered CNNs for the MTL with statistically significant improvement to the raw EEG-based STLs. 
This shows that the MTL can be a promising method for emotion recognition in utilizing the raw EEG-based CNN classifiers and emphasizes the importance of considering context information.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Emotion recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multi-task learning (MTL)</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Convolutional neural network (CNN)</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Electroencephalogram (EEG)</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Park, Hoonseok</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0003-0776-6570</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kim, Sangyeon</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0001-9316-7982</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Park, Donghyun</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-0887-6168</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Jung, Jae-Yoon</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-4850-6284</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lee, Sangwon</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Nam, Chang S.</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0001-9005-0703</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten 
in</subfield><subfield code="t">Expert systems with applications</subfield><subfield code="d">Amsterdam [u.a.] : Elsevier Science, 1990</subfield><subfield code="g">227</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)320577961</subfield><subfield code="w">(DE-600)2017237-0</subfield><subfield code="w">(DE-576)11481807X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:227</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">227</subfield></datafield></record></collection>
|
score |
7.3994217 |