KNN Normalized Optimization and Platform Tuning Based on Hadoop
Big data has become part of life for many people. Data about people’s lives are continuously collected, analyzed, and applied as our society progresses into the big data era. Behind the scenes, computer server clusters need to process hundreds of millions of pieces of data every day. It...
Detailed description
Author(s): Chen Ma [author]; Yuhong Chi [author]
Format: E-article
Language: English
Published: 2022
Keywords: KNN; classification algorithm; HDFS; Hadoop-YARN; ZooKeeper; Hadoop-HA
Published in: IEEE Access - IEEE, 2014, 10(2022), pages 81406-81433
Published in: volume:10 ; year:2022 ; pages:81406-81433
Links:
DOI / URN: 10.1109/ACCESS.2022.3195872
Catalog ID: DOAJ028284704
LEADER 01000caa a22002652 4500
001 DOAJ028284704
003 DE-627
005 20230503061516.0
007 cr uuu---uuuuu
008 230226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/ACCESS.2022.3195872 |2 doi
035 |a (DE-627)DOAJ028284704
035 |a (DE-599)DOAJd9e036c2d6b04f4d98846b28029d75d9
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a TK1-9971
100 0 |a Chen Ma |e verfasserin |4 aut
245 1 0 |a KNN Normalized Optimization and Platform Tuning Based on Hadoop
264 1 |c 2022
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a Big data has become part of life for many people. Data about people’s lives are continuously collected, analyzed, and applied as our society progresses into the big data era. Behind the scenes, computer server clusters need to process hundreds of millions of pieces of data every day. It is very important to choose the right big data processing platform and algorithm for different kinds of datasets. Therefore, to work effectively with big data processing, it is necessary to master data classification algorithms. Classification screens and categorizes the current data in data mining to build a classification model or analyze a classification function. In addition, given data can be mapped to a specified category area, and the future development trend of the data can be predicted through classification models. Such algorithms therefore reduce the difficulty of operations and improve people’s work efficiency. This paper optimizes the classical classification algorithm KNN and designs a new normalized algorithm called PEWM_G KNN. For distance measurement, we use the Pearson correlation coefficient to replace the traditional Euclidean metric; we then refine the treatment of dataset attribute values by introducing the entropy weight method, combined with the Pearson measure, to optimize the distance calculation equation. After the K value is fixed, we add a Gaussian function to weight the class selection. In this study, we compared the effect of every step and tested datasets of different data types and sizes to evaluate the performance of the algorithm under different scenarios. The datasets used include Iris, Breast Cancer, Dry Bean, and HTRU2 (all from the University of California, Irvine).
Finally, we analyze the effect of different system configuration parameters on prediction rate and time. The experimental results show that the PEWM_G KNN algorithm achieves a better optimization effect than the original KNN algorithm on datasets with more complex attribute values and more records. Moreover, optimizing the platform parameters improves the prediction rates of the algorithms and reduces runtime. We tested PEWM_G KNN on the Hadoop platform, configured with HDFS, Hadoop-YARN, ZooKeeper, Hadoop-HA, and MapReduce.
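The abstract's distance and voting scheme (Pearson-based distance, entropy-weighted attributes, Gaussian-weighted voting) can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the function names and the exact way the entropy weights enter the Pearson measure are assumptions, since the paper, not this record, defines the precise equations.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: attributes whose values are more spread out
    across samples (lower entropy) receive higher weights."""
    # Min-max normalize each column, then convert values to proportions.
    rng = X.max(axis=0) - X.min(axis=0)
    P = (X - X.min(axis=0)) / (rng + 1e-12)
    P = (P + 1e-12) / (P + 1e-12).sum(axis=0)
    k = 1.0 / np.log(len(X))
    entropy = -k * (P * np.log(P)).sum(axis=0)
    d = 1.0 - entropy                       # degree of diversification
    return d / d.sum()

def pearson_distance(u, v, w):
    """Pearson-based distance: 1 - correlation of the two attribute
    vectors, with entropy weights applied to each attribute first."""
    uw, vw = u * w, v * w
    uc, vc = uw - uw.mean(), vw - vw.mean()
    denom = np.linalg.norm(uc) * np.linalg.norm(vc) + 1e-12
    return 1.0 - (uc @ vc) / denom

def pewm_g_knn_predict(X_train, y_train, x, k=5, sigma=1.0):
    """Classify x by its k nearest training points, each vote weighted
    by a Gaussian of its distance instead of a plain majority count."""
    w = entropy_weights(X_train)
    dists = np.array([pearson_distance(t, x, w) for t in X_train])
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        g = np.exp(-dists[i] ** 2 / (2 * sigma ** 2))  # Gaussian weight
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + g
    return max(votes, key=votes.get)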
650 4 |a KNN
650 4 |a classification algorithm
650 4 |a HDFS
650 4 |a Hadoop-YARN
650 4 |a ZooKeeper
650 4 |a Hadoop-HA
653 0 |a Electrical engineering. Electronics. Nuclear engineering
700 0 |a Yuhong Chi |e verfasserin |4 aut
773 0 8 |i In |t IEEE Access |d IEEE, 2014 |g 10(2022), Seite 81406-81433 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 1 8 |g volume:10 |g year:2022 |g pages:81406-81433
856 4 0 |u https://doi.org/10.1109/ACCESS.2022.3195872 |z kostenfrei
856 4 0 |u https://doaj.org/article/d9e036c2d6b04f4d98846b28029d75d9 |z kostenfrei
856 4 0 |u https://ieeexplore.ieee.org/document/9847130/ |z kostenfrei
856 4 2 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a SSG-OLC-PHA
912 |a GBV_ILN_11
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_213
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_2014
912 |a GBV_ILN_4012
912 |a GBV_ILN_4037
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4249
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4335
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 10 |j 2022 |h 81406-81433
code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">10</subfield><subfield code="j">2022</subfield><subfield code="h">81406-81433</subfield></datafield></record></collection>
|
callnumber-first |
T - Technology |
author |
Chen Ma |
spellingShingle |
Chen Ma misc TK1-9971 misc KNN misc classification algorithm misc HDFS misc Hadoop-YARN misc ZooKeeper misc Hadoop-HA misc Electrical engineering. Electronics. Nuclear engineering KNN Normalized Optimization and Platform Tuning Based on Hadoop |
authorStr |
Chen Ma |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)728440385 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK1-9971 |
illustrated |
Not Illustrated |
issn |
21693536 |
topic_title |
TK1-9971 KNN Normalized Optimization and Platform Tuning Based on Hadoop KNN classification algorithm HDFS Hadoop-YARN ZooKeeper Hadoop-HA |
topic |
misc TK1-9971 misc KNN misc classification algorithm misc HDFS misc Hadoop-YARN misc ZooKeeper misc Hadoop-HA misc Electrical engineering. Electronics. Nuclear engineering |
topic_unstemmed |
misc TK1-9971 misc KNN misc classification algorithm misc HDFS misc Hadoop-YARN misc ZooKeeper misc Hadoop-HA misc Electrical engineering. Electronics. Nuclear engineering |
topic_browse |
misc TK1-9971 misc KNN misc classification algorithm misc HDFS misc Hadoop-YARN misc ZooKeeper misc Hadoop-HA misc Electrical engineering. Electronics. Nuclear engineering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Access |
hierarchy_parent_id |
728440385 |
hierarchy_top_title |
IEEE Access |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)728440385 (DE-600)2687964-5 |
title |
KNN Normalized Optimization and Platform Tuning Based on Hadoop |
ctrlnum |
(DE-627)DOAJ028284704 (DE-599)DOAJd9e036c2d6b04f4d98846b28029d75d9 |
title_full |
KNN Normalized Optimization and Platform Tuning Based on Hadoop |
author_sort |
Chen Ma |
journal |
IEEE Access |
journalStr |
IEEE Access |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
81406 |
author_browse |
Chen Ma Yuhong Chi |
container_volume |
10 |
class |
TK1-9971 |
format_se |
Elektronische Aufsätze |
author-letter |
Chen Ma |
doi_str_mv |
10.1109/ACCESS.2022.3195872 |
author2-role |
verfasserin |
title_sort |
knn normalized optimization and platform tuning based on hadoop |
callnumber |
TK1-9971 |
title_auth |
KNN Normalized Optimization and Platform Tuning Based on Hadoop |
abstract |
Big data has become part of life for many people. Data about people’s lives are continuously collected, analyzed, and applied as our society progresses into the big data era. Behind the scenes, computer server clusters must process hundreds of millions of pieces of data every day, so it is very important to choose the right big data processing platform and algorithm for different kinds of datasets. To be fully familiar with the work that drives big data processing, it is therefore necessary to master data classification algorithms. Classification builds a model, or an operational analysis of a classification function, by screening and categorizing the current data in data mining; given data can then be mapped to a specified category, and the development trend of future data can be predicted through the classification model. Such algorithms thus reduce the difficulty of operations and improve people’s work efficiency. This paper optimizes the classical classification algorithm KNN and designs a new normalized algorithm called PEWM_G KNN. For distance measurement, we use the Pearson correlation coefficient to replace the traditional Euclidean metric; we then refine the treatment of dataset attribute values by introducing the entropy weight method, combined with the Pearson measure, to optimize the distance calculation. After the K value is fixed, we add a Gaussian function to weight the selection of the class. We compared the effect of every step and tested datasets of different data types and sizes in order to evaluate the performance of the algorithm under different scenarios. The datasets used include Iris, Breast Cancer, Dry Bean, and HTRU2 (all from the University of California, Irvine). 
Finally, we analyze the effect of different system configuration parameters on the prediction rate and running time. The experimental results show that the PEWM_G KNN algorithm optimizes datasets with more complex attribute values and more records better than the original KNN algorithm does, and that tuning the platform parameters further improves the prediction rates of the algorithms and reduces the running time. We tested PEWM_G KNN on the Hadoop platform, configured with HDFS, Hadoop-YARN, ZooKeeper, Hadoop-HA and MapReduce. |
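The abstract describes the PEWM_G KNN pipeline in three steps: entropy-weighted attributes, a Pearson-correlation distance replacing the Euclidean metric, and a Gaussian function weighting the vote among the K nearest neighbours. The following is a minimal illustrative sketch of that kind of classifier in plain NumPy; the function names, the exact weighting formulas, and the parameter `sigma` are assumptions for illustration, not the authors' Hadoop/MapReduce implementation from the paper.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method (sketch): attributes whose values are spread
    less evenly across samples are treated as more informative and get
    larger weights. Assumes non-negative attribute values."""
    P = X / (X.sum(axis=0, keepdims=True) + 1e-12)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                      # "divergence" of each attribute
    return d / d.sum()

def pearson_distance(a, b):
    """1 - Pearson correlation: 0 for identical profiles, up to 2 for
    perfectly anti-correlated ones (stands in for the Euclidean metric)."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def pewm_g_knn_predict(X_train, y_train, x, k=5, sigma=0.5):
    """Classify x: scale attributes by entropy weights, rank training
    samples by Pearson distance, then take a Gaussian-weighted vote
    among the k nearest neighbours (closer neighbours count more)."""
    w = entropy_weights(X_train)
    dists = np.array([pearson_distance(row * w, x * w) for row in X_train])
    nearest = np.argsort(dists)[:k]
    vote = np.exp(-dists[nearest] ** 2 / (2.0 * sigma ** 2))  # Gaussian kernel
    scores = {c: vote[y_train[nearest] == c].sum()
              for c in np.unique(y_train[nearest])}
    return max(scores, key=scores.get)
```

With two classes whose attribute profiles are ascending versus descending, an ascending query lands on the ascending class because its Pearson distance to those rows is near 0 while its distance to the anti-correlated rows is near 2.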
abstractGer |
Big data has become part of life for many people. Data about people’s lives are continuously collected, analyzed, and applied as our society progresses into the big data era. Behind the scenes, computer server clusters must process hundreds of millions of pieces of data every day, so it is very important to choose the right big data processing platform and algorithm for different kinds of datasets. To be fully familiar with the work that drives big data processing, it is therefore necessary to master data classification algorithms. Classification builds a model, or an operational analysis of a classification function, by screening and categorizing the current data in data mining; given data can then be mapped to a specified category, and the development trend of future data can be predicted through the classification model. Such algorithms thus reduce the difficulty of operations and improve people’s work efficiency. This paper optimizes the classical classification algorithm KNN and designs a new normalized algorithm called PEWM_G KNN. For distance measurement, we use the Pearson correlation coefficient to replace the traditional Euclidean metric; we then refine the treatment of dataset attribute values by introducing the entropy weight method, combined with the Pearson measure, to optimize the distance calculation. After the K value is fixed, we add a Gaussian function to weight the selection of the class. We compared the effect of every step and tested datasets of different data types and sizes in order to evaluate the performance of the algorithm under different scenarios. The datasets used include Iris, Breast Cancer, Dry Bean, and HTRU2 (all from the University of California, Irvine). 
Finally, we analyze the effect of different system configuration parameters on the prediction rate and running time. The experimental results show that the PEWM_G KNN algorithm optimizes datasets with more complex attribute values and more records better than the original KNN algorithm does, and that tuning the platform parameters further improves the prediction rates of the algorithms and reduces the running time. We tested PEWM_G KNN on the Hadoop platform, configured with HDFS, Hadoop-YARN, ZooKeeper, Hadoop-HA and MapReduce. |
abstract_unstemmed |
Big data has become part of life for many people. Data about people’s lives are continuously collected, analyzed, and applied as our society progresses into the big data era. Behind the scenes, computer server clusters must process hundreds of millions of pieces of data every day, so it is very important to choose the right big data processing platform and algorithm for different kinds of datasets. To be fully familiar with the work that drives big data processing, it is therefore necessary to master data classification algorithms. Classification builds a model, or an operational analysis of a classification function, by screening and categorizing the current data in data mining; given data can then be mapped to a specified category, and the development trend of future data can be predicted through the classification model. Such algorithms thus reduce the difficulty of operations and improve people’s work efficiency. This paper optimizes the classical classification algorithm KNN and designs a new normalized algorithm called PEWM_G KNN. For distance measurement, we use the Pearson correlation coefficient to replace the traditional Euclidean metric; we then refine the treatment of dataset attribute values by introducing the entropy weight method, combined with the Pearson measure, to optimize the distance calculation. After the K value is fixed, we add a Gaussian function to weight the selection of the class. We compared the effect of every step and tested datasets of different data types and sizes in order to evaluate the performance of the algorithm under different scenarios. The datasets used include Iris, Breast Cancer, Dry Bean, and HTRU2 (all from the University of California, Irvine). 
Finally, we analyze the effect of different system configuration parameters on the prediction rate and running time. The experimental results show that the PEWM_G KNN algorithm optimizes datasets with more complex attribute values and more records better than the original KNN algorithm does, and that tuning the platform parameters further improves the prediction rates of the algorithms and reduces the running time. We tested PEWM_G KNN on the Hadoop platform, configured with HDFS, Hadoop-YARN, ZooKeeper, Hadoop-HA and MapReduce. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
KNN Normalized Optimization and Platform Tuning Based on Hadoop |
url |
https://doi.org/10.1109/ACCESS.2022.3195872 https://doaj.org/article/d9e036c2d6b04f4d98846b28029d75d9 https://ieeexplore.ieee.org/document/9847130/ https://doaj.org/toc/2169-3536 |
remote_bool |
true |
author2 |
Yuhong Chi |
author2Str |
Yuhong Chi |
ppnlink |
728440385 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/ACCESS.2022.3195872 |
callnumber-a |
TK1-9971 |
up_date |
2024-07-03T16:44:34.253Z |
_version_ |
1803577012265156608 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ028284704</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503061516.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230226s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/ACCESS.2022.3195872</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ028284704</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJd9e036c2d6b04f4d98846b28029d75d9</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK1-9971</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Chen Ma</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">KNN Normalized Optimization and Platform Tuning Based on Hadoop</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Big data has become part of life for many people. Data about people’s lives are continuously collected, analyzed, and applied as our society progresses into the big data era. Behind the scenes, computer server clusters must process hundreds of millions of pieces of data every day, so it is very important to choose the right big data processing platform and algorithm for different kinds of datasets. To be fully familiar with the work that drives big data processing, it is therefore necessary to master data classification algorithms. Classification builds a model, or an operational analysis of a classification function, by screening and categorizing the current data in data mining; given data can then be mapped to a specified category, and the development trend of future data can be predicted through the classification model. Such algorithms thus reduce the difficulty of operations and improve people’s work efficiency. This paper optimizes the classical classification algorithm KNN and designs a new normalized algorithm called PEWM_G KNN. For distance measurement, we use the Pearson correlation coefficient to replace the traditional Euclidean metric; we then refine the treatment of dataset attribute values by introducing the entropy weight method, combined with the Pearson measure, to optimize the distance calculation. After the K value is fixed, we add a Gaussian function to weight the selection of the class. We compared the effect of every step and tested datasets of different data types and sizes in order to evaluate the performance of the algorithm under different scenarios. The datasets used include Iris, Breast Cancer, Dry Bean, and HTRU2 (all from the University of California, Irvine). 
Finally, we analyze the effect of different system configuration parameters on the prediction rate and running time. The experimental results show that the PEWM_G KNN algorithm optimizes datasets with more complex attribute values and more records better than the original KNN algorithm does, and that tuning the platform parameters further improves the prediction rates of the algorithms and reduces the running time. We tested PEWM_G KNN on the Hadoop platform, configured with HDFS, Hadoop-YARN, ZooKeeper, Hadoop-HA and MapReduce.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">KNN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">classification algorithm</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">HDFS</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Hadoop-YARN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">ZooKeeper</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Hadoop-HA</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electrical engineering. Electronics. 
Nuclear engineering</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yuhong Chi</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Access</subfield><subfield code="d">IEEE, 2014</subfield><subfield code="g">10(2022), Seite 81406-81433</subfield><subfield code="w">(DE-627)728440385</subfield><subfield code="w">(DE-600)2687964-5</subfield><subfield code="x">21693536</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:10</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:81406-81433</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/ACCESS.2022.3195872</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/d9e036c2d6b04f4d98846b28029d75d9</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/9847130/</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2169-3536</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">10</subfield><subfield code="j">2022</subfield><subfield code="h">81406-81433</subfield></datafield></record></collection>
|
score |
7.3987617 |