Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery
With the popularity of deep learning, motor imagery electroencephalogram (MI-EEG) recognition based on feature extractors and classifiers has performed well. However, the features extracted by most models are not discriminative enough and are limited to specific-subject classification. We proposed...
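The additive angular margin loss mentioned in the abstract (an ArcFace-style objective) penalizes the angle between an embedding and its true class center, which pushes intra-class compactness and inter-class separability. A minimal NumPy sketch; the `margin` and `scale` values and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def additive_angular_margin_logits(features, weights, labels, margin=0.5, scale=30.0):
    """Additive angular margin logits (illustrative sketch).

    features: (batch, dim) embedding vectors
    weights:  (num_classes, dim) class-center vectors
    labels:   (batch,) integer class labels
    """
    # Normalize embeddings and class centers so logits are cosines of angles.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(f @ w.T, -1.0, 1.0)   # cosine similarity, (batch, num_classes)
    theta = np.arccos(cos)              # angles between embeddings and centers
    # Add the margin only to each sample's true-class angle: the true class
    # must win by a margin, tightening each class cluster.
    rows = np.arange(len(labels))
    theta[rows, labels] += margin
    return scale * np.cos(theta)        # scaled logits for softmax cross-entropy

def softmax_cross_entropy(logits, labels):
    # Standard numerically stable softmax cross-entropy.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-np.mean(np.log(p[np.arange(len(labels)), labels])))
```

With a margin greater than zero, the true-class logit can only shrink, so the loss is strictly harder than plain cosine softmax on the same embeddings, which is the intended training pressure.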
Detailed description

Author: Jia, Xueyu [author]; Song, Yonghao [author]; Xie, Longhan [author]
Format: E-Article
Language: English
Published: 2022
Subjects: Motor imagery; Common spatial pattern; Transformer; Metric learning; Cross subjects
Contained in: Biomedical signal processing and control - Amsterdam [u.a.] : Elsevier, 2006, 79
Contained in: volume:79
DOI / URN: 10.1016/j.bspc.2022.104051
Catalog ID: ELV009708952
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | ELV009708952 | ||
003 | DE-627 | ||
005 | 20230530141212.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230530s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.bspc.2022.104051 |2 doi | |
035 | |a (DE-627)ELV009708952 | ||
035 | |a (ELSEVIER)S1746-8094(22)00524-9 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 610 |q VZ |
084 | |a 44.09 |2 bkl | ||
084 | |a 44.32 |2 bkl | ||
100 | 1 | |a Jia, Xueyu |e verfasserin |4 aut | |
245 | 1 | 0 | |a Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery |
264 | 1 | |c 2022 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a With the popularity of deep learning, motor imagery electroencephalogram (MI-EEG) recognition based on feature extractors and classifiers has performed well. However, the features extracted by most models are not discriminative enough and are limited to specific-subject classification. We proposed a novel model, the Metric-based Spatial Filtering Transformer (MSFT), that utilizes additive angular margin loss to enforce the deep model to improve inter-class separability while enhancing intra-class compactness. Besides, a data augmentation method called EEG pyramid was applied to the model. Our model not only outperforms many recent benchmarks in specific-subject classification, but also can be used for cross-subject and even cross-task classification. We conducted experiments on the BCI Competition IV 2a and 2b datasets to evaluate average accuracy. Specific-subject: 86.11 % for 2a, 88.39 % for 2b. Cross-subject: 61.92 % for 2a. Cross-task: training the feature extractor with 2a data and then fine-tuning the classifier with 2b can achieve an average accuracy of 83.38 %. Our method is more general than most benchmarks and can deal with different kinds of classification situations. | ||
650 | 4 | |a Motor imagery | |
650 | 4 | |a Common spatial pattern | |
650 | 4 | |a Transformer | |
650 | 4 | |a Metric learning | |
650 | 4 | |a Cross subjects | |
700 | 1 | |a Song, Yonghao |e verfasserin |4 aut | |
700 | 1 | |a Xie, Longhan |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Biomedical signal processing and control |d Amsterdam [u.a.] : Elsevier, 2006 |g 79 |h Online-Ressource |w (DE-627)515537861 |w (DE-600)2241886-6 |w (DE-576)261592653 |x 1746-8108 |7 nnns |
773 | 1 | 8 | |g volume:79 |
912 | |a GBV_USEFLAG_U | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SSG-OLC-PHA | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
936 | b | k | |a 44.09 |j Medizintechnik |q VZ |
936 | b | k | |a 44.32 |j Medizinische Mathematik |j medizinische Statistik |q VZ |
951 | |a AR | ||
952 | |d 79 |
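The cross-task result in the abstract (feature extractor trained on the 2a data, classifier then fine-tuned on 2b) amounts to freezing a pretrained backbone and retraining only a softmax head. The sketch below shows that protocol with a toy random-projection extractor standing in for the MSFT backbone; all sizes, names, and hyperparameters here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def extractor(x, W):
    # Frozen "pretrained" feature extractor: a fixed nonlinear projection
    # standing in for the backbone trained on the source task (toy stand-in).
    return np.tanh(x @ W)

def ce_loss(feats, V, labels):
    # Softmax cross-entropy of a linear head V on frozen features.
    z = feats @ V
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.mean(np.log(p[np.arange(len(labels)), labels])))

def finetune_classifier(feats, labels, num_classes, lr=0.05, steps=200):
    # Fine-tuning step: only the classifier head V is updated by gradient
    # descent; the extractor weights W are never touched.
    V = np.zeros((feats.shape[1], num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        z = feats @ V
        z -= z.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        V -= lr * feats.T @ (p - onehot) / len(labels)
    return V

# Target-task data (toy): labels depend on the inputs, so the head can learn.
W = rng.normal(size=(16, 32))          # frozen backbone weights
x = rng.normal(size=(64, 16))          # target-task inputs
y = (x[:, 0] > 0).astype(int)          # target-task labels
feats = extractor(x, W)
V = finetune_classifier(feats, y, num_classes=2)
```

Because only the small head is trained, this transfer step needs far less target-task data than training the whole model, which is what makes the 2a-to-2b fine-tuning practical.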
author_variant: x j xj y s ys l x lx
matchkey_str: article:17468108:2022----::xelnfntnnfoseiisbetlsiiainorstscas
hierarchy_sort_str: 2022
bklnumber: 44.09 44.32
publishDate: 2022
language: English
source: Contained in Biomedical signal processing and control 79 volume:79
format_phy_str_mv: Article
bklname: Medizintechnik; Medizinische Mathematik; medizinische Statistik
institution: findex.gbv.de
topic_facet: Motor imagery Common spatial pattern Transformer Metric learning Cross subjects
dewey-raw: 610
isfreeaccess_bool: false
container_title: Biomedical signal processing and control
authorswithroles_txt_mv: Jia, Xueyu @@aut@@ Song, Yonghao @@aut@@ Xie, Longhan @@aut@@
publishDateDaySort_date: 2022-01-01T00:00:00Z
hierarchy_top_id: 515537861
dewey-sort: 3610
id: ELV009708952
language_de: englisch
|
author |
Jia, Xueyu |
spellingShingle |
Jia, Xueyu ddc 610 bkl 44.09 bkl 44.32 misc Motor imagery misc Common spatial pattern misc Transformer misc Metric learning misc Cross subjects Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery |
authorStr |
Jia, Xueyu |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)515537861 |
format |
electronic Article |
dewey-ones |
610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1746-8108 |
topic_title |
610 VZ 44.09 bkl 44.32 bkl Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery Motor imagery Common spatial pattern Transformer Metric learning Cross subjects |
topic |
ddc 610 bkl 44.09 bkl 44.32 misc Motor imagery misc Common spatial pattern misc Transformer misc Metric learning misc Cross subjects |
topic_unstemmed |
ddc 610 bkl 44.09 bkl 44.32 misc Motor imagery misc Common spatial pattern misc Transformer misc Metric learning misc Cross subjects |
topic_browse |
ddc 610 bkl 44.09 bkl 44.32 misc Motor imagery misc Common spatial pattern misc Transformer misc Metric learning misc Cross subjects |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Biomedical signal processing and control |
hierarchy_parent_id |
515537861 |
dewey-tens |
610 - Medicine & health |
hierarchy_top_title |
Biomedical signal processing and control |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)515537861 (DE-600)2241886-6 (DE-576)261592653 |
title |
Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery |
ctrlnum |
(DE-627)ELV009708952 (ELSEVIER)S1746-8094(22)00524-9 |
title_full |
Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery |
author_sort |
Jia, Xueyu |
journal |
Biomedical signal processing and control |
journalStr |
Biomedical signal processing and control |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
author_browse |
Jia, Xueyu Song, Yonghao Xie, Longhan |
container_volume |
79 |
class |
610 VZ 44.09 bkl 44.32 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Jia, Xueyu |
doi_str_mv |
10.1016/j.bspc.2022.104051 |
dewey-full |
610 |
author2-role |
verfasserin |
title_sort |
excellent fine-tuning: from specific-subject classification to cross-task classification for motor imagery |
title_auth |
Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery |
abstract |
With the popularity of deep learning, motor imagery electroencephalogram (MI-EEG) recognition based on feature extractors and classifiers has performed well. However, the features extracted by most models are not discriminative enough and are limited to specific-subject classification. We proposed a novel model, the Metric-based Spatial Filtering Transformer (MSFT), which utilizes additive angular margin loss to enforce the deep model to improve inter-class separability while enhancing intra-class compactness. In addition, a data augmentation method called EEG pyramid was applied to the model. Our model not only outperforms many recent benchmarks in specific-subject classification, but can also be used for cross-subject and even cross-task classification. We conducted experiments on the BCI Competition IV 2a and 2b datasets to evaluate average accuracy. Specific-subject: 86.11 % for 2a, 88.39 % for 2b. Cross-subject: 61.92 % for 2a. Cross-task: training the feature extractor with 2a data and then fine-tuning the classifier with 2b achieves an average accuracy of 83.38 %. Our method is more general than most benchmarks and can deal with different kinds of classification situations.
abstractGer |
With the popularity of deep learning, motor imagery electroencephalogram (MI-EEG) recognition based on feature extractors and classifiers has performed well. However, the features extracted by most models are not discriminative enough and are limited to specific-subject classification. We proposed a novel model, the Metric-based Spatial Filtering Transformer (MSFT), which utilizes additive angular margin loss to enforce the deep model to improve inter-class separability while enhancing intra-class compactness. In addition, a data augmentation method called EEG pyramid was applied to the model. Our model not only outperforms many recent benchmarks in specific-subject classification, but can also be used for cross-subject and even cross-task classification. We conducted experiments on the BCI Competition IV 2a and 2b datasets to evaluate average accuracy. Specific-subject: 86.11 % for 2a, 88.39 % for 2b. Cross-subject: 61.92 % for 2a. Cross-task: training the feature extractor with 2a data and then fine-tuning the classifier with 2b achieves an average accuracy of 83.38 %. Our method is more general than most benchmarks and can deal with different kinds of classification situations.
abstract_unstemmed |
With the popularity of deep learning, motor imagery electroencephalogram (MI-EEG) recognition based on feature extractors and classifiers has performed well. However, the features extracted by most models are not discriminative enough and are limited to specific-subject classification. We proposed a novel model, the Metric-based Spatial Filtering Transformer (MSFT), which utilizes additive angular margin loss to enforce the deep model to improve inter-class separability while enhancing intra-class compactness. In addition, a data augmentation method called EEG pyramid was applied to the model. Our model not only outperforms many recent benchmarks in specific-subject classification, but can also be used for cross-subject and even cross-task classification. We conducted experiments on the BCI Competition IV 2a and 2b datasets to evaluate average accuracy. Specific-subject: 86.11 % for 2a, 88.39 % for 2b. Cross-subject: 61.92 % for 2a. Cross-task: training the feature extractor with 2a data and then fine-tuning the classifier with 2b achieves an average accuracy of 83.38 %. Our method is more general than most benchmarks and can deal with different kinds of classification situations.
collection_details |
GBV_USEFLAG_U SYSFLAG_U GBV_ELV SSG-OLC-PHA GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_224 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2008 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4393 |
title_short |
Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery |
remote_bool |
true |
author2 |
Song, Yonghao Xie, Longhan |
author2Str |
Song, Yonghao Xie, Longhan |
ppnlink |
515537861 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.bspc.2022.104051 |
up_date |
2024-07-07T00:05:09.516Z |
_version_ |
1803876522543546368 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">ELV009708952</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230530141212.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230530s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.bspc.2022.104051</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV009708952</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S1746-8094(22)00524-9</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.09</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.32</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Jia, Xueyu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Excellent fine-tuning: From specific-subject classification to cross-task classification for motor imagery</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield 
code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">With the popularity of deep learning, motor imagery electroencephalogram (MI-EEG) recognition based on feature extractors and classifiers has performed well. However, the features extracted by most models are not discriminative enough and are limited to specific-subject classification. We proposed a novel model, the Metric-based Spatial Filtering Transformer (MSFT), which utilizes additive angular margin loss to enforce the deep model to improve inter-class separability while enhancing intra-class compactness. In addition, a data augmentation method called EEG pyramid was applied to the model. Our model not only outperforms many recent benchmarks in specific-subject classification, but can also be used for cross-subject and even cross-task classification. We conducted experiments on the BCI Competition IV 2a and 2b datasets to evaluate average accuracy. Specific-subject: 86.11 % for 2a, 88.39 % for 2b. Cross-subject: 61.92 % for 2a. Cross-task: training the feature extractor with 2a data and then fine-tuning the classifier with 2b achieves an average accuracy of 83.38 %. Our method is more general than most benchmarks and can deal with different kinds of classification situations.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Motor imagery</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Common spatial pattern</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Transformer</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Metric learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cross subjects</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Song, Yonghao</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xie, Longhan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Biomedical signal processing and control</subfield><subfield code="d">Amsterdam [u.a.]
: Elsevier, 2006</subfield><subfield code="g">79</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)515537861</subfield><subfield code="w">(DE-600)2241886-6</subfield><subfield code="w">(DE-576)261592653</subfield><subfield code="x">1746-8108</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:79</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">44.09</subfield><subfield code="j">Medizintechnik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">44.32</subfield><subfield code="j">Medizinische Mathematik</subfield><subfield code="j">medizinische Statistik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">79</subfield></datafield></record></collection>
|
score |
7.4003716 |