A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility
Accurate atmospheric visibility prediction is of great significance to public transport safety. However, because visibility is affected by multiple factors, its heterogeneous spatial distribution and rapid temporal variation remain difficult to predict. In this paper, a recurrent neural netw...
Detailed description

Author: Zengliang Zang [author]; Xulun Bao [author]; Yi Li [author]; Youming Qu [author]; Dan Niu [author]; Ning Liu [author]; Xisong Chen [author]
Format: E-article
Language: English
Published: 2023
Keywords: atmospheric visibility prediction
Published in: Remote Sensing - MDPI AG, 2009, 15(2023), 3, p 553
Published in: volume:15 ; year:2023 ; number:3, p 553
Links:
DOI / URN: 10.3390/rs15030553
Catalog ID: DOAJ080599451
LEADER 01000caa a22002652 4500
001 DOAJ080599451
003 DE-627
005 20240413065351.0
007 cr uuu---uuuuu
008 230310s2023 xx |||||o 00| ||eng c
024 7 |a 10.3390/rs15030553 |2 doi
035 |a (DE-627)DOAJ080599451
035 |a (DE-599)DOAJ9d3db32e9cf04c9893815c6d21dca50c
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 0 |a Zengliang Zang |e verfasserin |4 aut
245 1 2 |a A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility
264 1 |c 2023
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a Accurate atmospheric visibility prediction is of great significance to public transport safety. However, because visibility is affected by multiple factors, its heterogeneous spatial distribution and rapid temporal variation remain difficult to predict. In this paper, a recurrent neural network (RNN) prediction model named SwiftRNN is developed, modified with a frame-hopping transmission gate (FHTG), a feature fusion module (FFM) and reverse scheduled sampling (RSS). The new FHTG accelerates training, the FFM extracts and fuses global and local features, and the RSS learns spatial details and improves prediction accuracy. Based on ground-based atmospheric visibility observations from the China Meteorological Information Center from 1 January 2018 to 31 December 2020, the SwiftRNN model and two traditional models, ConvLSTM and PredRNN, are applied to predict hourly atmospheric visibility in central and eastern China at lead times up to 12 h. The results show that the SwiftRNN model achieves better visibility prediction skill scores than the ConvLSTM and PredRNN models. The averaged structural similarity (SSIM) of predictions at lead times up to 12 h is 0.444, 0.425 and 0.399 for the SwiftRNN, PredRNN and ConvLSTM models, respectively, and the averaged learned perceptual image patch similarity (LPIPS) is 0.289, 0.315 and 0.328, respectively. The averaged critical success index (CSI) of predictions over fog areas with visibility below 1000 m is 0.221, 0.205 and 0.194, respectively. Moreover, the SwiftRNN model trains 14.3% faster than the PredRNN model. The prediction skill of the SwiftRNN model over medium-grade fog areas (visibility below 1000 m) also improves significantly with lead time compared with the ConvLSTM and PredRNN models. All of these results demonstrate that the SwiftRNN model is a powerful tool for predicting atmospheric visibility.
650 4 |a atmospheric visibility prediction
650 4 |a spatiotemporal sequence prediction
650 4 |a RNN
650 4 |a improve accuracy
650 4 |a feature fusion
653 0 |a Science
653 0 |a Q
700 0 |a Xulun Bao |e verfasserin |4 aut
700 0 |a Yi Li |e verfasserin |4 aut
700 0 |a Youming Qu |e verfasserin |4 aut
700 0 |a Dan Niu |e verfasserin |4 aut
700 0 |a Ning Liu |e verfasserin |4 aut
700 0 |a Xisong Chen |e verfasserin |4 aut
773 0 8 |i In |t Remote Sensing |d MDPI AG, 2009 |g 15(2023), 3, p 553 |w (DE-627)608937916 |w (DE-600)2513863-7 |x 20724292 |7 nnns
773 1 8 |g volume:15 |g year:2023 |g number:3, p 553
856 4 0 |u https://doi.org/10.3390/rs15030553 |z kostenfrei
856 4 0 |u https://doaj.org/article/9d3db32e9cf04c9893815c6d21dca50c |z kostenfrei
856 4 0 |u https://www.mdpi.com/2072-4292/15/3/553 |z kostenfrei
856 4 2 |u https://doaj.org/toc/2072-4292 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_206
912 |a GBV_ILN_213
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_2005
912 |a GBV_ILN_2009
912 |a GBV_ILN_2011
912 |a GBV_ILN_2014
912 |a GBV_ILN_2055
912 |a GBV_ILN_2108
912 |a GBV_ILN_2111
912 |a GBV_ILN_2119
912 |a GBV_ILN_4012
912 |a GBV_ILN_4037
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4249
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4335
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4392
912 |a GBV_ILN_4700
951 |a AR
952 |d 15 |j 2023 |e 3, p 553
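As a hedged illustration of the critical success index (CSI) quoted in the abstract above, the sketch below computes CSI for a fog event defined as visibility below a threshold (1000 m, matching the abstract's fog category). The function name and the toy arrays are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

def critical_success_index(pred_vis, obs_vis, threshold=1000.0):
    """CSI for low-visibility (fog) events, i.e. visibility below `threshold` metres.

    CSI = hits / (hits + misses + false alarms), where a hit is a grid point
    where both prediction and observation fall below the threshold.
    """
    pred_event = np.asarray(pred_vis) < threshold
    obs_event = np.asarray(obs_vis) < threshold
    hits = np.sum(pred_event & obs_event)
    misses = np.sum(~pred_event & obs_event)
    false_alarms = np.sum(pred_event & ~obs_event)
    denom = hits + misses + false_alarms
    return float(hits) / denom if denom else float("nan")

# Toy visibility fields in metres (hypothetical values):
pred = np.array([800.0, 1200.0, 500.0, 2000.0])
obs = np.array([900.0, 800.0, 400.0, 3000.0])
print(critical_success_index(pred, obs))  # hits=2, misses=1, false alarms=0 -> 2/3
```

A perfect forecast gives CSI = 1; a forecast that never overlaps the observed fog gives 0, which is why the abstract's values (0.19-0.22 at 12 h lead) are reported per model for comparison rather than as absolute quality.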
author_variant |
z z zz x b xb y l yl y q yq d n dn n l nl x c xc |
matchkey_str |
article:20724292:2023----::mdfernaedelannmtofrrdcinf |
hierarchy_sort_str |
2023 |
publishDate |
2023 |
language |
English |
source |
In Remote Sensing 15(2023), 3, p 553 volume:15 year:2023 number:3, p 553 |
sourceStr |
In Remote Sensing 15(2023), 3, p 553 volume:15 year:2023 number:3, p 553 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
atmospheric visibility prediction spatiotemporal sequence prediction RNN improve accuracy feature fusion Science Q |
isfreeaccess_bool |
true |
container_title |
Remote Sensing |
authorswithroles_txt_mv |
Zengliang Zang @@aut@@ Xulun Bao @@aut@@ Yi Li @@aut@@ Youming Qu @@aut@@ Dan Niu @@aut@@ Ning Liu @@aut@@ Xisong Chen @@aut@@ |
publishDateDaySort_date |
2023-01-01T00:00:00Z |
hierarchy_top_id |
608937916 |
id |
DOAJ080599451 |
language_de |
englisch |
" ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2119</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4392</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">15</subfield><subfield code="j">2023</subfield><subfield code="e">3, p 553</subfield></datafield></record></collection>
|
author |
Zengliang Zang |
spellingShingle |
Zengliang Zang misc atmospheric visibility prediction misc spatiotemporal sequence prediction misc RNN misc improve accuracy misc feature fusion misc Science misc Q A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility |
authorStr |
Zengliang Zang |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)608937916 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
20724292 |
topic_title |
A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility atmospheric visibility prediction spatiotemporal sequence prediction RNN improve accuracy feature fusion |
topic |
misc atmospheric visibility prediction misc spatiotemporal sequence prediction misc RNN misc improve accuracy misc feature fusion misc Science misc Q |
topic_unstemmed |
misc atmospheric visibility prediction misc spatiotemporal sequence prediction misc RNN misc improve accuracy misc feature fusion misc Science misc Q |
topic_browse |
misc atmospheric visibility prediction misc spatiotemporal sequence prediction misc RNN misc improve accuracy misc feature fusion misc Science misc Q |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Remote Sensing |
hierarchy_parent_id |
608937916 |
hierarchy_top_title |
Remote Sensing |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)608937916 (DE-600)2513863-7 |
title |
A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility |
ctrlnum |
(DE-627)DOAJ080599451 (DE-599)DOAJ9d3db32e9cf04c9893815c6d21dca50c |
title_full |
A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility |
author_sort |
Zengliang Zang |
journal |
Remote Sensing |
journalStr |
Remote Sensing |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
txt |
author_browse |
Zengliang Zang Xulun Bao Yi Li Youming Qu Dan Niu Ning Liu Xisong Chen |
container_volume |
15 |
format_se |
Elektronische Aufsätze |
author-letter |
Zengliang Zang |
doi_str_mv |
10.3390/rs15030553 |
author2-role |
verfasserin |
title_sort |
modified rnn-based deep learning method for prediction of atmospheric visibility |
title_auth |
A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility |
abstract |
Accurate atmospheric visibility prediction is of great significance to public transport safety. However, since visibility is affected by multiple factors, difficulties still remain in predicting its heterogeneous spatial distribution and rapid temporal variation. In this paper, a recurrent neural network (RNN) prediction model modified with a frame-hopping transmission gate (FHTG), a feature fusion module (FFM) and reverse scheduled sampling (RSS), named SwiftRNN, is developed. The new FHTG is used to accelerate training, the FFM is used to extract and fuse global and local features, and the RSS is employed to learn spatial details and improve prediction accuracy. Based on ground-based atmospheric visibility observations from the China Meteorological Information Center covering 1 January 2018 to 31 December 2020, the SwiftRNN model and two traditional models, ConvLSTM and PredRNN, are applied to predict hourly atmospheric visibility in central and eastern China at lead times of up to 12 h. The results show that the SwiftRNN model achieves better visibility prediction skill scores than the ConvLSTM and PredRNN models. The averaged structural similarity (SSIM) of predictions at lead times up to 12 h is 0.444, 0.425 and 0.399 for the SwiftRNN, PredRNN and ConvLSTM models, respectively, and the averaged learned perceptual image patch similarity (LPIPS) is 0.289, 0.315 and 0.328, respectively. The averaged critical success index (CSI) of predictions over the 1000 m fog area is 0.221, 0.205 and 0.194, respectively. Moreover, the SwiftRNN model trains 14.3% faster than the PredRNN model. It is also found that, compared with the ConvLSTM and PredRNN models, the prediction skill of the SwiftRNN model over the 1000 m medium-grade fog area improves significantly with lead time. All the above results demonstrate that the SwiftRNN model is a powerful tool for predicting atmospheric visibility.
abstractGer |
Accurate atmospheric visibility prediction is of great significance to public transport safety. However, since visibility is affected by multiple factors, difficulties still remain in predicting its heterogeneous spatial distribution and rapid temporal variation. In this paper, a recurrent neural network (RNN) prediction model modified with a frame-hopping transmission gate (FHTG), a feature fusion module (FFM) and reverse scheduled sampling (RSS), named SwiftRNN, is developed. The new FHTG is used to accelerate training, the FFM is used to extract and fuse global and local features, and the RSS is employed to learn spatial details and improve prediction accuracy. Based on ground-based atmospheric visibility observations from the China Meteorological Information Center covering 1 January 2018 to 31 December 2020, the SwiftRNN model and two traditional models, ConvLSTM and PredRNN, are applied to predict hourly atmospheric visibility in central and eastern China at lead times of up to 12 h. The results show that the SwiftRNN model achieves better visibility prediction skill scores than the ConvLSTM and PredRNN models. The averaged structural similarity (SSIM) of predictions at lead times up to 12 h is 0.444, 0.425 and 0.399 for the SwiftRNN, PredRNN and ConvLSTM models, respectively, and the averaged learned perceptual image patch similarity (LPIPS) is 0.289, 0.315 and 0.328, respectively. The averaged critical success index (CSI) of predictions over the 1000 m fog area is 0.221, 0.205 and 0.194, respectively. Moreover, the SwiftRNN model trains 14.3% faster than the PredRNN model. It is also found that, compared with the ConvLSTM and PredRNN models, the prediction skill of the SwiftRNN model over the 1000 m medium-grade fog area improves significantly with lead time. All the above results demonstrate that the SwiftRNN model is a powerful tool for predicting atmospheric visibility.
abstract_unstemmed |
Accurate atmospheric visibility prediction is of great significance to public transport safety. However, since visibility is affected by multiple factors, difficulties still remain in predicting its heterogeneous spatial distribution and rapid temporal variation. In this paper, a recurrent neural network (RNN) prediction model modified with a frame-hopping transmission gate (FHTG), a feature fusion module (FFM) and reverse scheduled sampling (RSS), named SwiftRNN, is developed. The new FHTG is used to accelerate training, the FFM is used to extract and fuse global and local features, and the RSS is employed to learn spatial details and improve prediction accuracy. Based on ground-based atmospheric visibility observations from the China Meteorological Information Center covering 1 January 2018 to 31 December 2020, the SwiftRNN model and two traditional models, ConvLSTM and PredRNN, are applied to predict hourly atmospheric visibility in central and eastern China at lead times of up to 12 h. The results show that the SwiftRNN model achieves better visibility prediction skill scores than the ConvLSTM and PredRNN models. The averaged structural similarity (SSIM) of predictions at lead times up to 12 h is 0.444, 0.425 and 0.399 for the SwiftRNN, PredRNN and ConvLSTM models, respectively, and the averaged learned perceptual image patch similarity (LPIPS) is 0.289, 0.315 and 0.328, respectively. The averaged critical success index (CSI) of predictions over the 1000 m fog area is 0.221, 0.205 and 0.194, respectively. Moreover, the SwiftRNN model trains 14.3% faster than the PredRNN model. It is also found that, compared with the ConvLSTM and PredRNN models, the prediction skill of the SwiftRNN model over the 1000 m medium-grade fog area improves significantly with lead time. All the above results demonstrate that the SwiftRNN model is a powerful tool for predicting atmospheric visibility.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2005 GBV_ILN_2009 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2055 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2119 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4392 GBV_ILN_4700 |
container_issue |
3, p 553 |
title_short |
A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility |
url |
https://doi.org/10.3390/rs15030553 https://doaj.org/article/9d3db32e9cf04c9893815c6d21dca50c https://www.mdpi.com/2072-4292/15/3/553 https://doaj.org/toc/2072-4292 |
remote_bool |
true |
author2 |
Xulun Bao Yi Li Youming Qu Dan Niu Ning Liu Xisong Chen |
author2Str |
Xulun Bao Yi Li Youming Qu Dan Niu Ning Liu Xisong Chen |
ppnlink |
608937916 |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.3390/rs15030553 |
up_date |
2024-07-03T15:31:11.120Z |
_version_ |
1803572395252908032 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ080599451</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240413065351.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230310s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3390/rs15030553</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ080599451</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ9d3db32e9cf04c9893815c6d21dca50c</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Zengliang Zang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="2"><subfield code="a">A Modified RNN-Based Deep Learning Method for Prediction of Atmospheric Visibility</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" 
"><subfield code="a">Accurate atmospheric visibility prediction is of great significance to public transport safety. However, since visibility is affected by multiple factors, difficulties still remain in predicting its heterogeneous spatial distribution and rapid temporal variation. In this paper, a recurrent neural network (RNN) prediction model modified with a frame-hopping transmission gate (FHTG), a feature fusion module (FFM) and reverse scheduled sampling (RSS), named SwiftRNN, is developed. The new FHTG is used to accelerate training, the FFM is used to extract and fuse global and local features, and the RSS is employed to learn spatial details and improve prediction accuracy. Based on ground-based atmospheric visibility observations from the China Meteorological Information Center covering 1 January 2018 to 31 December 2020, the SwiftRNN model and two traditional models, ConvLSTM and PredRNN, are applied to predict hourly atmospheric visibility in central and eastern China at lead times of up to 12 h. The results show that the SwiftRNN model achieves better visibility prediction skill scores than the ConvLSTM and PredRNN models. The averaged structural similarity (SSIM) of predictions at lead times up to 12 h is 0.444, 0.425 and 0.399 for the SwiftRNN, PredRNN and ConvLSTM models, respectively, and the averaged learned perceptual image patch similarity (LPIPS) is 0.289, 0.315 and 0.328, respectively. The averaged critical success index (CSI) of predictions over the 1000 m fog area is 0.221, 0.205 and 0.194, respectively. Moreover, the SwiftRNN model trains 14.3% faster than the PredRNN model. It is also found that, compared with the ConvLSTM and PredRNN models, the prediction skill of the SwiftRNN model over the 1000 m medium-grade fog area improves significantly with lead time.
All the above results demonstrate that the SwiftRNN model is a powerful tool for predicting atmospheric visibility.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">atmospheric visibility prediction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">spatiotemporal sequence prediction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">RNN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">improve accuracy</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">feature fusion</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Science</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Q</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xulun Bao</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yi Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Youming Qu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Dan Niu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Ning Liu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xisong Chen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Remote Sensing</subfield><subfield code="d">MDPI AG, 2009</subfield><subfield code="g">15(2023), 3, p 553</subfield><subfield
code="w">(DE-627)608937916</subfield><subfield code="w">(DE-600)2513863-7</subfield><subfield code="x">20724292</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:15</subfield><subfield code="g">year:2023</subfield><subfield code="g">number:3, p 553</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.3390/rs15030553</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/9d3db32e9cf04c9893815c6d21dca50c</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.mdpi.com/2072-4292/15/3/553</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2072-4292</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" 
" ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2119</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4392</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">15</subfield><subfield code="j">2023</subfield><subfield code="e">3, p 553</subfield></datafield></record></collection>
|