An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm
Support vector regression (SVR) performs satisfactorily in prediction problems, especially for small sample prediction. The setting parameters (e.g., kernel type and penalty factor) profoundly impact the performance and efficiency of SVR. The adaptive adjustment of the parameters has always been a research hotspot. However, the substantial time cost and forecast accuracy of parameter adjustment are challenging to many scholars. The contradiction of big data prediction is especially prominent. In the paper, an SVR-based prediction approach is presented using the K-means clustering method (KMCM) and chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are applied to obtain appropriate key parameters of KMCM and CSMA, and the forecast accuracy, stability performance and computation complexity are evaluated. The proposed approach obtains the optimal (joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets. It ranks first with a score of 0.024 in the overall evaluation. The outcomes reveal that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although the performance is not outstanding in terms of stability, the proposed approach exhibits very strong performance with respect to prediction accuracy and computation complexity. This work validated the tremendous potential of the proposed approach in the prediction field.
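The abstract describes tuning SVR's setting parameters (penalty factor C, kernel parameter gamma) with a chaos-enhanced metaheuristic. As a loose illustration of that general idea — not the paper's CSMA, and with search ranges and dataset chosen purely for demonstration — the sketch below draws candidate parameters from a logistic-map chaotic sequence and keeps the best cross-validated SVR:

```python
# Illustrative sketch only: chaotic (logistic-map) sampling of SVR's C and
# gamma on a synthetic dataset. The paper's actual method (KMCM + CSMA) is
# more elaborate; ranges and data here are assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

def chaotic_sequence(n, x0=0.7, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); yields values in (0, 1)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

# Map chaotic values into assumed log-scale search ranges.
cs = 10.0 ** (chaotic_sequence(20, x0=0.7) * 4 - 2)      # C in [1e-2, 1e2]
gammas = 10.0 ** (chaotic_sequence(20, x0=0.3) * 4 - 3)  # gamma in [1e-3, 1e1]

best_score, best_params = -np.inf, None
for C, gamma in zip(cs, gammas):
    # 3-fold cross-validated R^2 as the fitness of this parameter pair.
    score = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, (C, gamma)

print("best (C, gamma):", best_params, "score:", round(best_score, 3))
```

A full metaheuristic would additionally update the candidate population between iterations (as SMA does); this sketch shows only the chaotic-sampling-plus-evaluation loop.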
Detailed description
Author: Ziyi Chen [author]; Wenbai Liu [author]
Format: E-Article
Language: English
Published: 2020
Subjects: Machine learning; forecasting problem; K-means clustering method; chaotic slime mould algorithm; support vector regression
Published in: IEEE Access - IEEE, 2014, 8(2020), pages 156851-156862
Published in: volume:8 ; year:2020 ; pages:156851-156862
DOI / URN: 10.1109/ACCESS.2020.3018866
Catalog ID: DOAJ072495138
LEADER  01000caa a22002652 4500
001     DOAJ072495138
003     DE-627
005     20230503150956.0
007     cr uuu---uuuuu
008     230228s2020 xx |||||o 00| ||eng c
024 7   |a 10.1109/ACCESS.2020.3018866 |2 doi
035     |a (DE-627)DOAJ072495138
035     |a (DE-599)DOAJ5bc7aa2b4ebf454298178cc3199d7466
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
050  0  |a TK1-9971
100 0   |a Ziyi Chen |e verfasserin |4 aut
245 1 3 |a An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm
264  1  |c 2020
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
520     |a Support vector regression (SVR) performs satisfactorily in prediction problems, especially for small sample prediction. The setting parameters (e.g., kernel type and penalty factor) profoundly impact the performance and efficiency of SVR. The adaptive adjustment of the parameters has always been a research hotspot. However, the substantial time cost and forecast accuracy of parameter adjustment are challenging to many scholars. The contradiction of big data prediction is especially prominent. In the paper, an SVR-based prediction approach is presented using the K-means clustering method (KMCM) and chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are applied to obtain appropriate key parameters of KMCM and CSMA, and the forecast accuracy, stability performance and computation complexity are evaluated. The proposed approach obtains the optimal (joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets. It ranks first with a score of 0.024 in the overall evaluation. The outcomes reveal that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although the performance is not outstanding in terms of stability, the proposed approach exhibits very strong performance with respect to prediction accuracy and computation complexity. This work validated the tremendous potential of the proposed approach in the prediction field.
650  4  |a Machine learning
650  4  |a forecasting problem
650  4  |a K-means clustering method
650  4  |a chaotic slime mould algorithm
650  4  |a support vector regression
653  0  |a Electrical engineering. Electronics. Nuclear engineering
700 0   |a Wenbai Liu |e verfasserin |4 aut
773 0 8 |i In |t IEEE Access |d IEEE, 2014 |g 8(2020), Seite 156851-156862 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 1 8 |g volume:8 |g year:2020 |g pages:156851-156862
856 4 0 |u https://doi.org/10.1109/ACCESS.2020.3018866 |z kostenfrei
856 4 0 |u https://doaj.org/article/5bc7aa2b4ebf454298178cc3199d7466 |z kostenfrei
856 4 0 |u https://ieeexplore.ieee.org/document/9174730/ |z kostenfrei
856 4 2 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912     |a GBV_USEFLAG_A
912     |a SYSFLAG_A
912     |a GBV_DOAJ
912     |a SSG-OLC-PHA
912     |a GBV_ILN_11
912     |a GBV_ILN_20
912     |a GBV_ILN_22
912     |a GBV_ILN_23
912     |a GBV_ILN_24
912     |a GBV_ILN_31
912     |a GBV_ILN_39
912     |a GBV_ILN_40
912     |a GBV_ILN_60
912     |a GBV_ILN_62
912     |a GBV_ILN_63
912     |a GBV_ILN_65
912     |a GBV_ILN_69
912     |a GBV_ILN_70
912     |a GBV_ILN_73
912     |a GBV_ILN_95
912     |a GBV_ILN_105
912     |a GBV_ILN_110
912     |a GBV_ILN_151
912     |a GBV_ILN_161
912     |a GBV_ILN_170
912     |a GBV_ILN_213
912     |a GBV_ILN_230
912     |a GBV_ILN_285
912     |a GBV_ILN_293
912     |a GBV_ILN_370
912     |a GBV_ILN_602
912     |a GBV_ILN_2014
912     |a GBV_ILN_4012
912     |a GBV_ILN_4037
912     |a GBV_ILN_4112
912     |a GBV_ILN_4125
912     |a GBV_ILN_4126
912     |a GBV_ILN_4249
912     |a GBV_ILN_4305
912     |a GBV_ILN_4306
912     |a GBV_ILN_4307
912     |a GBV_ILN_4313
912     |a GBV_ILN_4322
912     |a GBV_ILN_4323
912     |a GBV_ILN_4324
912     |a GBV_ILN_4325
912     |a GBV_ILN_4335
912     |a GBV_ILN_4338
912     |a GBV_ILN_4367
912     |a GBV_ILN_4700
951     |a AR
952     |d 8 |j 2020 |h 156851-156862
author_variant |
z c zc w l wl
matchkey_str |
article:21693536:2020----::nfiinprmtrdpieuprvcorgesouigmaslseign |
hierarchy_sort_str |
2020 |
callnumber-subject-code |
TK |
publishDate |
2020 |
allfields |
10.1109/ACCESS.2020.3018866 doi (DE-627)DOAJ072495138 (DE-599)DOAJ5bc7aa2b4ebf454298178cc3199d7466 DE-627 ger DE-627 rakwb eng TK1-9971 Ziyi Chen verfasserin aut An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm 2020 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier Support vector regression (SVR) performs satisfactorily in prediction problems, especially for small sample prediction. The setting parameters (e.g., kernel type and penalty factor) profoundly impact the performance and efficiency of SVR. The adaptive adjustment of the parameters has always been a research hotspot. However, the substantial time cost and forecast accuracy of parameter adjustment are challenging to many scholars. The contradiction of big data prediction is especially prominent. In the paper, an SVR-based prediction approach is presented using the K-means clustering method (KMCM) and chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are applied to obtain appropriate key parameters of KMCM and CSMA, and the forecast accuracy, stability performance and computation complexity are evaluated. The proposed approach obtains the optimal (joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets. It ranks first with a score of 0.024 in the overall evaluation. The outcomes reveal that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although the performance is not outstanding in terms of stability, the proposed approach exhibits very strong performance with respect to prediction accuracy and computation complexity. This work validated the tremendous potential of the proposed approach in the prediction field. 
Machine learning forecasting problem K-means clustering method chaotic slime mould algorithm support vector regression Electrical engineering. Electronics. Nuclear engineering Wenbai Liu verfasserin aut In IEEE Access IEEE, 2014 8(2020), Seite 156851-156862 (DE-627)728440385 (DE-600)2687964-5 21693536 nnns volume:8 year:2020 pages:156851-156862 https://doi.org/10.1109/ACCESS.2020.3018866 kostenfrei https://doaj.org/article/5bc7aa2b4ebf454298178cc3199d7466 kostenfrei https://ieeexplore.ieee.org/document/9174730/ kostenfrei https://doaj.org/toc/2169-3536 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 8 2020 156851-156862 |
language |
English |
source |
In IEEE Access 8(2020), Seite 156851-156862 volume:8 year:2020 pages:156851-156862 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Machine learning forecasting problem K-means clustering method chaotic slime mould algorithm support vector regression Electrical engineering. Electronics. Nuclear engineering |
isfreeaccess_bool |
true |
container_title |
IEEE Access |
authorswithroles_txt_mv |
Ziyi Chen @@aut@@ Wenbai Liu @@aut@@ |
publishDateDaySort_date |
2020-01-01T00:00:00Z |
hierarchy_top_id |
728440385 |
id |
DOAJ072495138 |
language_de |
englisch |
callnumber-first |
T - Technology |
author |
Ziyi Chen |
spellingShingle |
Ziyi Chen misc TK1-9971 misc Machine learning misc forecasting problem misc K-means clustering method misc chaotic slime mould algorithm misc support vector regression misc Electrical engineering. Electronics. Nuclear engineering An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm |
authorStr |
Ziyi Chen |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)728440385 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK1-9971 |
illustrated |
Not Illustrated |
issn |
21693536 |
topic_title |
TK1-9971 An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm Machine learning forecasting problem K-means clustering method chaotic slime mould algorithm support vector regression |
topic |
misc TK1-9971 misc Machine learning misc forecasting problem misc K-means clustering method misc chaotic slime mould algorithm misc support vector regression misc Electrical engineering. Electronics. Nuclear engineering |
topic_unstemmed |
misc TK1-9971 misc Machine learning misc forecasting problem misc K-means clustering method misc chaotic slime mould algorithm misc support vector regression misc Electrical engineering. Electronics. Nuclear engineering |
topic_browse |
misc TK1-9971 misc Machine learning misc forecasting problem misc K-means clustering method misc chaotic slime mould algorithm misc support vector regression misc Electrical engineering. Electronics. Nuclear engineering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Access |
hierarchy_parent_id |
728440385 |
hierarchy_top_title |
IEEE Access |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)728440385 (DE-600)2687964-5 |
title |
An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm |
ctrlnum |
(DE-627)DOAJ072495138 (DE-599)DOAJ5bc7aa2b4ebf454298178cc3199d7466 |
title_full |
An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm |
author_sort |
Ziyi Chen |
journal |
IEEE Access |
journalStr |
IEEE Access |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
txt |
container_start_page |
156851 |
author_browse |
Ziyi Chen Wenbai Liu |
container_volume |
8 |
class |
TK1-9971 |
format_se |
Elektronische Aufsätze |
author-letter |
Ziyi Chen |
doi_str_mv |
10.1109/ACCESS.2020.3018866 |
author2-role |
verfasserin |
title_sort |
efficient parameter adaptive support vector regression using k-means clustering and chaotic slime mould algorithm |
callnumber |
TK1-9971 |
title_auth |
An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm |
abstract |
Support vector regression (SVR) performs well in prediction problems, especially small-sample prediction. Its setting parameters (e.g., kernel type and penalty factor) profoundly affect its performance and efficiency, so their adaptive adjustment has long been a research hotspot. However, balancing the substantial time cost of parameter adjustment against forecast accuracy remains challenging for many researchers, and this trade-off is especially prominent in big data prediction. In this paper, an SVR-based prediction approach is presented that uses the K-means clustering method (KMCM) and a chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are used to determine appropriate key parameters of KMCM and CSMA, and forecast accuracy, stability and computational complexity are evaluated. The proposed approach achieves the best (or joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets, ranking first with a score of 0.024 in the overall evaluation. These outcomes show that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although its stability is not outstanding, the approach performs very strongly in prediction accuracy and computational complexity, demonstrating considerable potential in the prediction field.
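The tuning scheme the abstract describes, a chaotic variant of the slime mould algorithm searching for good SVR hyperparameters, can be illustrated with a minimal stand-in sketch. The paper's actual objective (cross-validated SVR error) and its SMA update rules are not reproduced here; instead, a logistic-map chaotic sequence drives a simple best-of-N search over a placeholder objective, so all names below (`chaotic_search`, `objective`) and the bounds and minimum are hypothetical.

```python
import math

def logistic_map(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; r = 4.0 is fully chaotic on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_search(objective, bounds, iters=200, seed=0.7):
    """Hypothetical stand-in for CSMA: a chaotic sequence proposes candidate
    hyperparameter vectors inside `bounds`; the best candidate seen is kept."""
    x = seed
    best_params, best_score = None, math.inf
    for _ in range(iters):
        params = []
        for lo, hi in bounds:
            x = logistic_map(x)                # chaotic number in (0, 1)
            params.append(lo + x * (hi - lo))  # scale into the search range
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Placeholder objective standing in for cross-validated SVR error:
# minimised at C = 10, gamma = 0.1 (illustrative values only).
def objective(p):
    C, gamma = p
    return (C - 10.0) ** 2 + (gamma - 0.1) ** 2

best, score = chaotic_search(objective, bounds=[(0.1, 100.0), (0.001, 1.0)])
print(best, score)
```

In the paper's setting, `objective` would instead train an SVR on a (K-means-reduced) subset and return its validation error; the chaotic sequence replaces a pseudo-random generator to improve the diversity of candidate points.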
abstractGer |
Support vector regression (SVR) performs well in prediction problems, especially small-sample prediction. Its setting parameters (e.g., kernel type and penalty factor) profoundly affect its performance and efficiency, so their adaptive adjustment has long been a research hotspot. However, balancing the substantial time cost of parameter adjustment against forecast accuracy remains challenging for many researchers, and this trade-off is especially prominent in big data prediction. In this paper, an SVR-based prediction approach is presented that uses the K-means clustering method (KMCM) and a chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are used to determine appropriate key parameters of KMCM and CSMA, and forecast accuracy, stability and computational complexity are evaluated. The proposed approach achieves the best (or joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets, ranking first with a score of 0.024 in the overall evaluation. These outcomes show that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although its stability is not outstanding, the approach performs very strongly in prediction accuracy and computational complexity, demonstrating considerable potential in the prediction field.
abstract_unstemmed |
Support vector regression (SVR) performs well in prediction problems, especially small-sample prediction. Its setting parameters (e.g., kernel type and penalty factor) profoundly affect its performance and efficiency, so their adaptive adjustment has long been a research hotspot. However, balancing the substantial time cost of parameter adjustment against forecast accuracy remains challenging for many researchers, and this trade-off is especially prominent in big data prediction. In this paper, an SVR-based prediction approach is presented that uses the K-means clustering method (KMCM) and a chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are used to determine appropriate key parameters of KMCM and CSMA, and forecast accuracy, stability and computational complexity are evaluated. The proposed approach achieves the best (or joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets, ranking first with a score of 0.024 in the overall evaluation. These outcomes show that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although its stability is not outstanding, the approach performs very strongly in prediction accuracy and computational complexity, demonstrating considerable potential in the prediction field.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm |
url |
https://doi.org/10.1109/ACCESS.2020.3018866 https://doaj.org/article/5bc7aa2b4ebf454298178cc3199d7466 https://ieeexplore.ieee.org/document/9174730/ https://doaj.org/toc/2169-3536 |
remote_bool |
true |
author2 |
Wenbai Liu |
author2Str |
Wenbai Liu |
ppnlink |
728440385 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/ACCESS.2020.3018866 |
callnumber-a |
TK1-9971 |
up_date |
2024-07-04T01:17:37.797Z |
_version_ |
1803609291147444224 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ072495138</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503150956.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230228s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/ACCESS.2020.3018866</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ072495138</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ5bc7aa2b4ebf454298178cc3199d7466</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK1-9971</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Ziyi Chen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="3"><subfield code="a">An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Support vector regression (SVR) performs satisfactorily in prediction problems, especially for small sample prediction. The setting parameters (e.g., kernel type and penalty factor) profoundly impact the performance and efficiency of SVR. The adaptive adjustment of the parameters has always been a research hotspot. However, the substantial time cost and forecast accuracy of parameter adjustment are challenging to many scholars. The contradiction of big data prediction is especially prominent. In the paper, an SVR-based prediction approach is presented using the K-means clustering method (KMCM) and chaotic slime mould algorithm (CSMA). Eight high- and low-dimensional benchmark datasets are applied to obtain appropriate key parameters of KMCM and CSMA, and the forecast accuracy, stability performance and computation complexity are evaluated. The proposed approach obtains the optimal (joint best) forecast accuracy on 6 datasets and produces the most stable output on 3 datasets. It ranks first with a score of 0.024 in the overall evaluation. The outcomes reveal that the proposed approach is capable of tuning the parameters of SVR. KMCM, CSMA and SVR are skillfully integrated in this work and perform well. Although the performance is not outstanding in terms of stability, the proposed approach exhibits very strong performance with respect to prediction accuracy and computation complexity. 
This work validated the tremendous potential of the proposed approach in the prediction field.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Machine learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">forecasting problem</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">K-means clustering method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">chaotic slime mould algorithm</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">support vector regression</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electrical engineering. Electronics. Nuclear engineering</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Wenbai Liu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Access</subfield><subfield code="d">IEEE, 2014</subfield><subfield code="g">8(2020), Seite 156851-156862</subfield><subfield code="w">(DE-627)728440385</subfield><subfield code="w">(DE-600)2687964-5</subfield><subfield code="x">21693536</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:8</subfield><subfield code="g">year:2020</subfield><subfield code="g">pages:156851-156862</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/ACCESS.2020.3018866</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/5bc7aa2b4ebf454298178cc3199d7466</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/9174730/</subfield><subfield 
code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2169-3536</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">8</subfield><subfield code="j">2020</subfield><subfield code="h">156851-156862</subfield></datafield></record></collection>
|
score |
7.4017487 |