Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients
Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was associated with older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images.
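The abstract evaluates the models with mean absolute error (MAE), mean absolute percentage error (MAPE) and the coefficient of determination (R²). As a minimal sketch of how these regression metrics are computed for predicted versus subjectively measured refractive errors — the sample values below are hypothetical spherical equivalents in diopters, not the study's data:

```python
def regression_metrics(y_true, y_pred):
    """Compute MAE, MAPE and R^2 for predicted vs. measured refractive error.

    Illustrative only: the study reports these metrics but does not publish
    its evaluation code, so this is a generic textbook implementation.
    """
    n = len(y_true)
    # Mean absolute error: average absolute deviation, in diopters.
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    # MAPE is undefined when a true value is 0 D (emmetropia); skip such eyes.
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t != 0]
    mape = 100 * sum(abs((t - p) / t) for t, p in pairs) / len(pairs)
    # R^2: 1 minus residual sum of squares over total sum of squares.
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return mae, mape, r2

# Hypothetical spherical-equivalent values (diopters) for five myopic eyes.
measured  = [-2.50, -4.00, -6.25, -1.75, -8.00]
predicted = [-2.00, -4.50, -5.75, -2.25, -7.50]
mae, mape, r2 = regression_metrics(measured, predicted)
print(f"MAE={mae:.2f} D  MAPE={mape:.1f}%  R2={r2:.3f}")
# → MAE=0.50 D  MAPE=15.1%  R2=0.954
```

Note that MAE stays in the clinically meaningful unit (diopters), which is why the abstract can compare it directly against the 0.75 D and 2.00 D thresholds.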
Detailed description

| Field | Value |
|---|---|
| Authors | Danjuan Yang; Meiyan Li; Weizhen Li; Yunzhe Wang; Lingling Niu; Yang Shen; Xiaoyu Zhang; Bo Fu; Xingtao Zhou |
| Format | E-article |
| Language | English |
| Published | 2022 |
| Published in | Frontiers in Medicine, Frontiers Media S.A., 2014, 9(2022) |
| Parent work | volume:9; year:2022 |
| DOI | 10.3389/fmed.2022.834281 |
| Catalog ID | DOAJ064318281 |
LEADER 01000caa a22002652 4500
001 DOAJ064318281
003 DE-627
005 20230309035331.0
007 cr uuu---uuuuu
008 230228s2022 xx |||||o 00| ||eng c
024 7  |a 10.3389/fmed.2022.834281 |2 doi
035    |a (DE-627)DOAJ064318281
035    |a (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a R5-920
100 0  |a Danjuan Yang |e verfasserin |4 aut
245 10 |a Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was associated with older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods: UWF fundus images were collected from the left eyes of 987 myopia patients at the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. All fundus images were captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained on the UWF images to predict refractive error. An additional 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the "true value" measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE) and the coefficient of determination (R²) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel. Results: The ResNet-50, Inception-v3 and Inception-ResNet-v2 models achieved R² of 0.9562, 0.9555 and 0.9563 and MAE of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86) and 1.76 (95% CI: 1.66–1.86), respectively. In the three models, 29.95%, 31.47% and 29.44% of the test set were within a prediction error of 0.75 D, and 64.97%, 64.97% and 64.47% were within 2.00 D. The predicted MAE was related to older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions: Predicting refractive error in myopic patients with deep learning models trained on UWF images is feasible, although the accuracy remains to be improved.
650  4 |a refractive error prediction
650  4 |a myopia
650  4 |a deep learning
650  4 |a ultrawide field imaging
650  4 |a ResNet-50
650  4 |a Inception-V3
653  0 |a Medicine (General)
700 0  |a Meiyan Li |e verfasserin |4 aut
700 0  |a Weizhen Li |e verfasserin |4 aut
700 0  |a Yunzhe Wang |e verfasserin |4 aut
700 0  |a Lingling Niu |e verfasserin |4 aut
700 0  |a Yang Shen |e verfasserin |4 aut
700 0  |a Xiaoyu Zhang |e verfasserin |4 aut
700 0  |a Bo Fu |e verfasserin |4 aut
700 0  |a Xingtao Zhou |e verfasserin |4 aut
773 08 |i In |t Frontiers in Medicine |d Frontiers Media S.A., 2014 |g 9(2022) |w (DE-627)789482991 |w (DE-600)2775999-4 |x 2296858X |7 nnns
773 18 |g volume:9 |g year:2022
856 40 |u https://doi.org/10.3389/fmed.2022.834281 |z kostenfrei
856 40 |u https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e |z kostenfrei
856 40 |u https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full |z kostenfrei
856 42 |u https://doaj.org/toc/2296-858X |y Journal toc |z kostenfrei
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_DOAJ
912    |a GBV_ILN_11
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_95
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_151
912    |a GBV_ILN_161
912    |a GBV_ILN_170
912    |a GBV_ILN_206
912    |a GBV_ILN_213
912    |a GBV_ILN_230
912    |a GBV_ILN_285
912    |a GBV_ILN_293
912    |a GBV_ILN_602
912    |a GBV_ILN_2003
912    |a GBV_ILN_2014
912    |a GBV_ILN_4012
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4249
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4338
912    |a GBV_ILN_4367
912    |a GBV_ILN_4700
951    |a AR
952    |d 9 |j 2022
author_variant |
d y dy d y dy d y dy d y dy d y dy m l ml m l ml m l ml m l ml m l ml w l wl y w yw l n ln l n ln l n ln l n ln l n ln y s ys y s ys y s ys y s ys y s ys x z xz x z xz x z xz x z xz x z xz b f bf x z xz x z xz x z xz x z xz x z xz |
---|---|
matchkey_str |
article:2296858X:2022----::rdcinfercierobsdnlrwdfedmgsiheperi |
hierarchy_sort_str |
2022 |
callnumber-subject-code |
R |
publishDate |
2022 |
allfields |
10.3389/fmed.2022.834281 doi (DE-627)DOAJ064318281 (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e DE-627 ger DE-627 rakwb eng R5-920 Danjuan Yang verfasserin aut Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier SummaryUltrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The predicted error was related to the older age and greater spherical power.PurposeTo explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images.MethodsUWF fundus images were collected from left eyes of 987 myopia patients of Eye and ENT Hospital, Fudan University between November 2015 and January 2019. The fundus images were all captured with Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images for predicting refractive error. 133 UWF fundus images were also collected after January 2021 as an the external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE) and coefficient (R2) value were calculated in the test set. The Spearman rank correlation test was applied for univariate analysis and multivariate linear regression analysis on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel.ResultsResNet-50, Inception-v3 and Inception-ResNet-v2 models were trained with the UWF images for refractive error prediction with R2 of 0.9562, 0.9555, 0.9563 and MAE of 1.72(95%CI: 1.62–1.82), 1.75(95%CI: 1.65–1.86) and 1.76(95%CI: 1.66–1.86), respectively. 
29.95%, 31.47% and 29.44% of the test set were within the predictive error of 0.75D in the three models. 64.97%, 64.97%, and 64.47% was within 2.00D predictive error. The predicted MAE was related to older age (P < 0.01) and greater spherical power(P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map.ConclusionsIt was feasible to predict refractive error in myopic patients with deep learning models trained by UWF images with the accuracy to be improved. refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 Medicine (General) Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Weizhen Li verfasserin aut Yunzhe Wang verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Bo Fu verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut In Frontiers in Medicine Frontiers Media S.A., 2014 9(2022) (DE-627)789482991 (DE-600)2775999-4 2296858X nnns volume:9 year:2022 https://doi.org/10.3389/fmed.2022.834281 kostenfrei https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e kostenfrei https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full kostenfrei https://doaj.org/toc/2296-858X Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 
GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 9 2022 |
spelling |
10.3389/fmed.2022.834281 doi (DE-627)DOAJ064318281 (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e DE-627 ger DE-627 rakwb eng R5-920 Danjuan Yang verfasserin aut Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier SummaryUltrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The predicted error was related to the older age and greater spherical power.PurposeTo explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images.MethodsUWF fundus images were collected from left eyes of 987 myopia patients of Eye and ENT Hospital, Fudan University between November 2015 and January 2019. The fundus images were all captured with Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images for predicting refractive error. 133 UWF fundus images were also collected after January 2021 as an the external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE) and coefficient (R2) value were calculated in the test set. The Spearman rank correlation test was applied for univariate analysis and multivariate linear regression analysis on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel.ResultsResNet-50, Inception-v3 and Inception-ResNet-v2 models were trained with the UWF images for refractive error prediction with R2 of 0.9562, 0.9555, 0.9563 and MAE of 1.72(95%CI: 1.62–1.82), 1.75(95%CI: 1.65–1.86) and 1.76(95%CI: 1.66–1.86), respectively. 
29.95%, 31.47% and 29.44% of the test set were within the predictive error of 0.75D in the three models. 64.97%, 64.97%, and 64.47% was within 2.00D predictive error. The predicted MAE was related to older age (P < 0.01) and greater spherical power(P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map.ConclusionsIt was feasible to predict refractive error in myopic patients with deep learning models trained by UWF images with the accuracy to be improved. refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 Medicine (General) Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Weizhen Li verfasserin aut Yunzhe Wang verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Bo Fu verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut In Frontiers in Medicine Frontiers Media S.A., 2014 9(2022) (DE-627)789482991 (DE-600)2775999-4 2296858X nnns volume:9 year:2022 https://doi.org/10.3389/fmed.2022.834281 kostenfrei https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e kostenfrei https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full kostenfrei https://doaj.org/toc/2296-858X Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 
GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 9 2022 |
allfields_unstemmed |
10.3389/fmed.2022.834281 doi (DE-627)DOAJ064318281 (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e DE-627 ger DE-627 rakwb eng R5-920 Danjuan Yang verfasserin aut Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier SummaryUltrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The predicted error was related to the older age and greater spherical power.PurposeTo explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images.MethodsUWF fundus images were collected from left eyes of 987 myopia patients of Eye and ENT Hospital, Fudan University between November 2015 and January 2019. The fundus images were all captured with Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images for predicting refractive error. 133 UWF fundus images were also collected after January 2021 as an the external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE) and coefficient (R2) value were calculated in the test set. The Spearman rank correlation test was applied for univariate analysis and multivariate linear regression analysis on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel.ResultsResNet-50, Inception-v3 and Inception-ResNet-v2 models were trained with the UWF images for refractive error prediction with R2 of 0.9562, 0.9555, 0.9563 and MAE of 1.72(95%CI: 1.62–1.82), 1.75(95%CI: 1.65–1.86) and 1.76(95%CI: 1.66–1.86), respectively. 
29.95%, 31.47% and 29.44% of the test set were within the predictive error of 0.75D in the three models. 64.97%, 64.97%, and 64.47% was within 2.00D predictive error. The predicted MAE was related to older age (P < 0.01) and greater spherical power(P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map.ConclusionsIt was feasible to predict refractive error in myopic patients with deep learning models trained by UWF images with the accuracy to be improved. refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 Medicine (General) Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Weizhen Li verfasserin aut Yunzhe Wang verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Bo Fu verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut In Frontiers in Medicine Frontiers Media S.A., 2014 9(2022) (DE-627)789482991 (DE-600)2775999-4 2296858X nnns volume:9 year:2022 https://doi.org/10.3389/fmed.2022.834281 kostenfrei https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e kostenfrei https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full kostenfrei https://doaj.org/toc/2296-858X Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 
GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 9 2022 |
allfieldsGer |
10.3389/fmed.2022.834281 doi (DE-627)DOAJ064318281 (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e DE-627 ger DE-627 rakwb eng R5-920 Danjuan Yang verfasserin aut Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier SummaryUltrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The predicted error was related to the older age and greater spherical power.PurposeTo explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images.MethodsUWF fundus images were collected from left eyes of 987 myopia patients of Eye and ENT Hospital, Fudan University between November 2015 and January 2019. The fundus images were all captured with Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images for predicting refractive error. 133 UWF fundus images were also collected after January 2021 as an the external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE) and coefficient (R2) value were calculated in the test set. The Spearman rank correlation test was applied for univariate analysis and multivariate linear regression analysis on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel.ResultsResNet-50, Inception-v3 and Inception-ResNet-v2 models were trained with the UWF images for refractive error prediction with R2 of 0.9562, 0.9555, 0.9563 and MAE of 1.72(95%CI: 1.62–1.82), 1.75(95%CI: 1.65–1.86) and 1.76(95%CI: 1.66–1.86), respectively. 
29.95%, 31.47% and 29.44% of the test set were within the predictive error of 0.75D in the three models. 64.97%, 64.97%, and 64.47% was within 2.00D predictive error. The predicted MAE was related to older age (P < 0.01) and greater spherical power(P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map.ConclusionsIt was feasible to predict refractive error in myopic patients with deep learning models trained by UWF images with the accuracy to be improved. refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 Medicine (General) Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Weizhen Li verfasserin aut Yunzhe Wang verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Bo Fu verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut In Frontiers in Medicine Frontiers Media S.A., 2014 9(2022) (DE-627)789482991 (DE-600)2775999-4 2296858X nnns volume:9 year:2022 https://doi.org/10.3389/fmed.2022.834281 kostenfrei https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e kostenfrei https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full kostenfrei https://doaj.org/toc/2296-858X Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 
GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 9 2022 |
allfieldsSound |
10.3389/fmed.2022.834281 doi (DE-627)DOAJ064318281 (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e DE-627 ger DE-627 rakwb eng R5-920 Danjuan Yang verfasserin aut Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier SummaryUltrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The predicted error was related to the older age and greater spherical power.PurposeTo explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images.MethodsUWF fundus images were collected from left eyes of 987 myopia patients of Eye and ENT Hospital, Fudan University between November 2015 and January 2019. The fundus images were all captured with Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images for predicting refractive error. 133 UWF fundus images were also collected after January 2021 as an the external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE) and coefficient (R2) value were calculated in the test set. The Spearman rank correlation test was applied for univariate analysis and multivariate linear regression analysis on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel.ResultsResNet-50, Inception-v3 and Inception-ResNet-v2 models were trained with the UWF images for refractive error prediction with R2 of 0.9562, 0.9555, 0.9563 and MAE of 1.72(95%CI: 1.62–1.82), 1.75(95%CI: 1.65–1.86) and 1.76(95%CI: 1.66–1.86), respectively. 
29.95%, 31.47% and 29.44% of the test set were within the predictive error of 0.75D in the three models. 64.97%, 64.97%, and 64.47% was within 2.00D predictive error. The predicted MAE was related to older age (P < 0.01) and greater spherical power(P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map.ConclusionsIt was feasible to predict refractive error in myopic patients with deep learning models trained by UWF images with the accuracy to be improved. refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 Medicine (General) Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Danjuan Yang verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Meiyan Li verfasserin aut Weizhen Li verfasserin aut Yunzhe Wang verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Lingling Niu verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Yang Shen verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Xiaoyu Zhang verfasserin aut Bo Fu verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut Xingtao Zhou verfasserin aut In Frontiers in Medicine Frontiers Media S.A., 2014 9(2022) (DE-627)789482991 (DE-600)2775999-4 2296858X nnns volume:9 year:2022 https://doi.org/10.3389/fmed.2022.834281 kostenfrei https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e kostenfrei https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full kostenfrei https://doaj.org/toc/2296-858X Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 
GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 9 2022 |
language |
English |
source |
In Frontiers in Medicine 9(2022) volume:9 year:2022 |
sourceStr |
In Frontiers in Medicine 9(2022) volume:9 year:2022 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 Medicine (General) |
isfreeaccess_bool |
true |
container_title |
Frontiers in Medicine |
authorswithroles_txt_mv |
Danjuan Yang @@aut@@ Meiyan Li @@aut@@ Weizhen Li @@aut@@ Yunzhe Wang @@aut@@ Lingling Niu @@aut@@ Yang Shen @@aut@@ Xiaoyu Zhang @@aut@@ Bo Fu @@aut@@ Xingtao Zhou @@aut@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
789482991 |
id |
DOAJ064318281 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ064318281</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230309035331.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230228s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3389/fmed.2022.834281</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ064318281</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">R5-920</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Danjuan Yang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield 
code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was related to older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods: UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. All fundus images were captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, and Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results: The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 values of 0.9562, 0.9555, and 0.9563, and MAE values of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86), and 1.76 (95% CI: 1.66–1.86), respectively. Across the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within 2.00 D. The predicted MAE was related to older age (P &lt; 0.01) and greater spherical power (P &lt; 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions: It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.</subfield></datafield>
<datafield tag="650" ind1=" " ind2="4"><subfield code="a">refractive error prediction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">myopia</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">ultrawide field imaging</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">ResNet-50</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Inception-V3</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Medicine (General)</subfield></datafield>
<datafield tag="700" ind1="0" ind2=" "><subfield code="a">Meiyan Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Weizhen Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yunzhe Wang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Lingling Niu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yang Shen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xiaoyu Zhang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Bo Fu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xingtao Zhou</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield>
<datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Frontiers in Medicine</subfield><subfield code="d">Frontiers Media S.A., 2014</subfield><subfield code="g">9(2022)</subfield><subfield code="w">(DE-627)789482991</subfield><subfield code="w">(DE-600)2775999-4</subfield><subfield code="x">2296858X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:9</subfield><subfield code="g">year:2022</subfield></datafield>
<datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.3389/fmed.2022.834281</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2296-858X</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield>
<datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">9</subfield><subfield code="j">2022</subfield></datafield></record></collection>
|
callnumber-first |
R - Medicine |
author |
Danjuan Yang |
spellingShingle |
Danjuan Yang misc R5-920 misc refractive error prediction misc myopia misc deep learning misc ultrawide field imaging misc ResNet-50 misc Inception-V3 misc Medicine (General) Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients |
authorStr |
Danjuan Yang |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)789482991 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut aut
collection |
DOAJ |
remote_str |
true |
callnumber-label |
R5-920 |
illustrated |
Not Illustrated |
issn |
2296858X |
topic_title |
R5-920 Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients refractive error prediction myopia deep learning ultrawide field imaging ResNet-50 Inception-V3 |
topic |
misc R5-920 misc refractive error prediction misc myopia misc deep learning misc ultrawide field imaging misc ResNet-50 misc Inception-V3 misc Medicine (General) |
topic_unstemmed |
misc R5-920 misc refractive error prediction misc myopia misc deep learning misc ultrawide field imaging misc ResNet-50 misc Inception-V3 misc Medicine (General) |
topic_browse |
misc R5-920 misc refractive error prediction misc myopia misc deep learning misc ultrawide field imaging misc ResNet-50 misc Inception-V3 misc Medicine (General) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Frontiers in Medicine |
hierarchy_parent_id |
789482991 |
hierarchy_top_title |
Frontiers in Medicine |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)789482991 (DE-600)2775999-4 |
title |
Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients |
ctrlnum |
(DE-627)DOAJ064318281 (DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e |
title_full |
Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients |
author_sort |
Danjuan Yang |
journal |
Frontiers in Medicine |
journalStr |
Frontiers in Medicine |
callnumber-first-code |
R |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
author_browse |
Danjuan Yang Meiyan Li Weizhen Li Yunzhe Wang Lingling Niu Yang Shen Xiaoyu Zhang Bo Fu Xingtao Zhou |
container_volume |
9 |
class |
R5-920 |
format_se |
Elektronische Aufsätze |
author-letter |
Danjuan Yang |
doi_str_mv |
10.3389/fmed.2022.834281 |
author2-role |
verfasserin |
title_sort |
prediction of refractive error based on ultrawide field images with deep learning models in myopia patients |
callnumber |
R5-920 |
title_auth |
Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients |
abstract |
Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was related to older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods: UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. All fundus images were captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, and Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results: The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 values of 0.9562, 0.9555, and 0.9563, and MAE values of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86), and 1.76 (95% CI: 1.66–1.86), respectively. Across the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within 2.00 D. The predicted MAE was related to older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions: It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.
abstractGer |
Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was related to older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods: UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. All fundus images were captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, and Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results: The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 values of 0.9562, 0.9555, and 0.9563, and MAE values of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86), and 1.76 (95% CI: 1.66–1.86), respectively. Across the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within 2.00 D. The predicted MAE was related to older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions: It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.
abstract_unstemmed |
Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was related to older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods: UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. All fundus images were captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, and Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results: The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 values of 0.9562, 0.9555, and 0.9563, and MAE values of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86), and 1.76 (95% CI: 1.66–1.86), respectively. Across the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within 2.00 D. The predicted MAE was related to older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions: It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients |
url |
https://doi.org/10.3389/fmed.2022.834281 https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full https://doaj.org/toc/2296-858X |
remote_bool |
true |
author2 |
Danjuan Yang Meiyan Li Weizhen Li Yunzhe Wang Lingling Niu Yang Shen Xiaoyu Zhang Bo Fu Xingtao Zhou |
author2Str |
Danjuan Yang Meiyan Li Weizhen Li Yunzhe Wang Lingling Niu Yang Shen Xiaoyu Zhang Bo Fu Xingtao Zhou |
ppnlink |
789482991 |
callnumber-subject |
R - General Medicine |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.3389/fmed.2022.834281 |
callnumber-a |
R5-920 |
up_date |
2024-07-03T22:17:35.582Z |
_version_ |
1803597964217679875 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ064318281</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230309035331.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230228s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3389/fmed.2022.834281</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ064318281</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJb6f36567d3034f13a7f106a1f5f0046e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">R5-920</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Danjuan Yang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield 
code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Summary: Ultrawide field fundus images can be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was related to older age and greater spherical power. Purpose: To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods: UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. All fundus images were captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, and Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the “true value” measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results: The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 values of 0.9562, 0.9555, and 0.9563, and MAE values of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86), and 1.76 (95% CI: 1.66–1.86), respectively. Across the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within 2.00 D. The predicted MAE was related to older age (P &lt; 0.01) and greater spherical power (P &lt; 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions: It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.</subfield></datafield>
<datafield tag="650" ind1=" " ind2="4"><subfield code="a">refractive error prediction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">myopia</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">ultrawide field imaging</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">ResNet-50</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Inception-V3</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Medicine (General)</subfield></datafield>
<datafield tag="700" ind1="0" ind2=" "><subfield code="a">Meiyan Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Weizhen Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yunzhe Wang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Lingling Niu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yang Shen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xiaoyu Zhang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Bo Fu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xingtao Zhou</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield>
<datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Frontiers in Medicine</subfield><subfield code="d">Frontiers Media S.A., 2014</subfield><subfield code="g">9(2022)</subfield><subfield code="w">(DE-627)789482991</subfield><subfield code="w">(DE-600)2775999-4</subfield><subfield code="x">2296858X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:9</subfield><subfield code="g">year:2022</subfield></datafield>
<datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.3389/fmed.2022.834281</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/b6f36567d3034f13a7f106a1f5f0046e</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.frontiersin.org/articles/10.3389/fmed.2022.834281/full</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2296-858X</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield>
<datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">9</subfield><subfield code="j">2022</subfield></datafield></record></collection>