Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm
A training dataset that is limited to a specific endoscope model can overfit artificial intelligence (AI) to its unique image characteristics. The AI's performance may degrade on images from a different endoscope model. A domain adaptation algorithm, the cycle-consistent adversarial network...
Detailed description

Author(s): Junseok Park; Youngbae Hwang; Hyun Gun Kim; Joon Seong Lee; Jin-Oh Kim; Tae Hee Lee; Seong Ran Jeon; Su Jin Hong; Bong Min Ko; Seokmin Kim
Format: E-article
Language: English
Published: 2022
Contained in: Frontiers in Medicine - Frontiers Media S.A., 2014, 9(2022)
Contained in: volume:9 ; year:2022
DOI / URN: 10.3389/fmed.2022.1036974
Catalog ID: DOAJ083865489
LEADER 01000caa a22002652 4500
001 DOAJ083865489
003 DE-627
005 20230503074605.0
007 cr uuu---uuuuu
008 230311s2022 xx |||||o 00| ||eng c
024 7 |a 10.3389/fmed.2022.1036974 |2 doi
035 |a (DE-627)DOAJ083865489
035 |a (DE-599)DOAJe249349aab3e45b4aac5813ced1e1785
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a R5-920
100 0 |a Junseok Park |e verfasserin |4 aut
245 1 0 |a Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm
264 1 |c 2022
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a A training dataset limited to a specific endoscope model can overfit artificial intelligence (AI) to that model's unique image characteristics, and the AI's performance may degrade on images from a different endoscope model. A domain adaptation algorithm, the cycle-consistent adversarial network (cycleGAN), can transform image characteristics into AI-friendly styles. We sought to confirm the performance degradation of AIs on images from various endoscope models and to improve it using cycleGAN transformation. Two AI models were developed from esophagogastroduodenoscopy data collected retrospectively over 5 years: one for identifying the endoscope models, Olympus CV-260SL, CV-290 (Olympus, Tokyo, Japan), and PENTAX EPK-i (PENTAX Medical, Tokyo, Japan), and the other for recognizing the esophagogastric junction (EGJ). The AIs were trained on 45,683 standardized images from 1,498 cases and validated on 624 separate cases. Between the two endoscope manufacturers there was a difference in image characteristics that the AI could distinguish without error. The accuracy of the AI in recognizing the EGJ was >0.979 on validation datasets from the same endoscope model as the training dataset; however, it deteriorated on datasets from different endoscopes. CycleGAN successfully converted image characteristics to improve AI performance. The improvements were statistically significant and greater for datasets from different endoscope manufacturers [original → AI-trained style, increase in area under the receiver operating characteristic (ROC) curve, P-value: CV-260SL → CV-290, 0.0056, P = 0.0106; CV-260SL → EPK-i, 0.0182, P = 0.0158; CV-290 → CV-260SL, 0.0134, P < 0.0001; CV-290 → EPK-i, 0.0299, P = 0.0001; EPK-i → CV-260SL, 0.0215, P = 0.0024; and EPK-i → CV-290, 0.0616, P < 0.0001]. In conclusion, cycleGAN can transform the diverse image characteristics of endoscope models into an AI-trained style to improve the detection performance of AI.
650 4 |a endoscopes
650 4 |a artificial intelligence
650 4 |a deep learning
650 4 |a generative adversarial network
650 4 |a domain adaptation algorithm
653 0 |a Medicine (General)
700 0 |a Youngbae Hwang |e verfasserin |4 aut
700 0 |a Hyun Gun Kim |e verfasserin |4 aut
700 0 |a Joon Seong Lee |e verfasserin |4 aut
700 0 |a Jin-Oh Kim |e verfasserin |4 aut
700 0 |a Tae Hee Lee |e verfasserin |4 aut
700 0 |a Seong Ran Jeon |e verfasserin |4 aut
700 0 |a Su Jin Hong |e verfasserin |4 aut
700 0 |a Bong Min Ko |e verfasserin |4 aut
700 0 |a Seokmin Kim |e verfasserin |4 aut
773 0 8 |i In |t Frontiers in Medicine |d Frontiers Media S.A., 2014 |g 9(2022) |w (DE-627)789482991 |w (DE-600)2775999-4 |x 2296858X |7 nnns
773 1 8 |g volume:9 |g year:2022
856 4 0 |u https://doi.org/10.3389/fmed.2022.1036974 |z kostenfrei
856 4 0 |u https://doaj.org/article/e249349aab3e45b4aac5813ced1e1785 |z kostenfrei
856 4 0 |u https://www.frontiersin.org/articles/10.3389/fmed.2022.1036974/full |z kostenfrei
856 4 2 |u https://doaj.org/toc/2296-858X |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a SSG-OLC-PHA
912 |a GBV_ILN_11
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_73
912 |a GBV_ILN_74
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_206
912 |a GBV_ILN_213
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_602
912 |a GBV_ILN_2003
912 |a GBV_ILN_2014
912 |a GBV_ILN_4012
912 |a GBV_ILN_4037
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4249
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 9 |j 2022
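The abstract describes an inference pipeline in which an image from an untrained endoscope model is first style-transformed toward the AI-trained style and only then fed to the detection AI. As a minimal, hypothetical sketch of that idea (using simple per-channel statistics matching as a stand-in for the study's trained cycleGAN generator, which is not reproducible here):

```python
import numpy as np

def match_style(image, target_mean, target_std):
    """Crude stand-in for a cycleGAN generator: shift each color
    channel's mean and standard deviation toward statistics measured
    on the AI-trained endoscope style. The study itself used a trained
    cycleGAN, not this heuristic."""
    out = np.empty(image.shape, dtype=np.float64)
    for c in range(image.shape[-1]):
        ch = image[..., c].astype(np.float64)
        std = ch.std() or 1.0  # guard against flat channels
        out[..., c] = (ch - ch.mean()) / std * target_std[c] + target_mean[c]
    return np.clip(out, 0.0, 255.0)

# Hypothetical usage: a frame from an "untrained" endoscope model is
# adapted toward (made-up) statistics of the training-set style before
# being passed to the downstream EGJ-recognition model.
rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, size=(8, 8, 3))
adapted = match_style(frame, target_mean=[120.0, 80.0, 70.0],
                      target_std=[30.0, 25.0, 20.0])
```

The design point mirrors the paper's: the detection model is left untouched, and only the input distribution is mapped into the style the model was trained on.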
"><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">9</subfield><subfield code="j">2022</subfield></datafield></record></collection>
|
callnumber-first |
R - Medicine |
author |
Junseok Park |
spellingShingle |
Junseok Park misc R5-920 misc endoscopes misc artificial intelligence misc deep learning misc generative adversarial network misc domain adaptation algorithm misc Medicine (General) Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm |
authorStr |
Junseok Park |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)789482991 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
R5-920 |
illustrated |
Not Illustrated |
issn |
2296858X |
topic_title |
R5-920 Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm endoscopes artificial intelligence deep learning generative adversarial network domain adaptation algorithm |
topic |
misc R5-920 misc endoscopes misc artificial intelligence misc deep learning misc generative adversarial network misc domain adaptation algorithm misc Medicine (General) |
topic_unstemmed |
misc R5-920 misc endoscopes misc artificial intelligence misc deep learning misc generative adversarial network misc domain adaptation algorithm misc Medicine (General) |
topic_browse |
misc R5-920 misc endoscopes misc artificial intelligence misc deep learning misc generative adversarial network misc domain adaptation algorithm misc Medicine (General) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Frontiers in Medicine |
hierarchy_parent_id |
789482991 |
hierarchy_top_title |
Frontiers in Medicine |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)789482991 (DE-600)2775999-4 |
title |
Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm |
ctrlnum |
(DE-627)DOAJ083865489 (DE-599)DOAJe249349aab3e45b4aac5813ced1e1785 |
title_full |
Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm |
author_sort |
Junseok Park |
journal |
Frontiers in Medicine |
journalStr |
Frontiers in Medicine |
callnumber-first-code |
R |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
author_browse |
Junseok Park Youngbae Hwang Hyun Gun Kim Joon Seong Lee Jin-Oh Kim Tae Hee Lee Seong Ran Jeon Su Jin Hong Bong Min Ko Seokmin Kim |
container_volume |
9 |
class |
R5-920 |
format_se |
Elektronische Aufsätze |
author-letter |
Junseok Park |
doi_str_mv |
10.3389/fmed.2022.1036974 |
author2-role |
verfasserin |
title_sort |
reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm |
callnumber |
R5-920 |
title_auth |
Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm |
abstract |
A training dataset limited to a specific endoscope model can overfit artificial intelligence (AI) to that model's unique image characteristics, and AI performance may degrade on images from a different endoscope model. A domain adaptation algorithm, the cycle-consistent adversarial network (cycleGAN), can transform image characteristics into AI-friendly styles. We attempted to confirm the performance degradation of AIs on images from various endoscope models and aimed to improve it using cycleGAN transformation. Two AI models were developed from esophagogastroduodenoscopy data collected retrospectively over 5 years: one for identifying the endoscope models, Olympus CV-260SL and CV-290 (Olympus, Tokyo, Japan) and PENTAX EPK-i (PENTAX Medical, Tokyo, Japan), and the other for recognizing the esophagogastric junction (EGJ). The AIs were trained on 45,683 standardized images from 1,498 cases and validated on 624 separate cases. Between the two endoscope manufacturers, there was a difference in image characteristics that the AI could distinguish without error. The accuracy of the AI in recognizing the EGJ was >0.979 when the validation dataset came from the same endoscope model as the training dataset; however, accuracy deteriorated on datasets from different endoscope models. CycleGAN successfully converted image characteristics to improve AI performance. The improvements were statistically significant and greater for datasets from a different endoscope manufacturer [original → AI-trained style, increase in area under the receiver operating characteristic (ROC) curve, P-value: CV-260SL → CV-290, 0.0056, P = 0.0106; CV-260SL → EPK-i, 0.0182, P = 0.0158; CV-290 → CV-260SL, 0.0134, P < 0.0001; CV-290 → EPK-i, 0.0299, P = 0.0001; EPK-i → CV-260SL, 0.0215, P = 0.0024; and EPK-i → CV-290, 0.0616, P < 0.0001]. In conclusion, cycleGAN can transform the diverse image characteristics of endoscope models into an AI-trained style to improve the detection performance of AI.
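The style-transfer idea in the abstract can be illustrated with a toy sketch: a per-channel color-statistics map between two simulated image "domains" standing in for different endoscope models. The functions, domain data, and affine mapping below are hypothetical stand-ins for the learned cycleGAN generators G: A→B and F: B→A; this shows only the cycle-consistency principle (A→B→A should recover A), not the authors' network.

```python
import numpy as np

def fit_style_map(src, dst):
    # Fit a per-channel affine map (gain, bias) that matches the
    # channel-wise mean/std of `src` images to those of `dst`.
    # A crude stand-in for a learned generator: it transfers only
    # global color statistics, not texture.
    s_mu, s_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-8
    d_mu, d_sd = dst.mean(axis=(0, 1)), dst.std(axis=(0, 1))
    gain = d_sd / s_sd
    bias = d_mu - gain * s_mu
    return gain, bias

def apply_style(img, gain, bias):
    return img * gain + bias

rng = np.random.default_rng(0)
# Hypothetical "domain A" and "domain B" images: same content,
# different global color casts (as between endoscope models).
content = rng.random((64, 64, 3))
dom_a = content * np.array([1.0, 0.8, 0.6])          # warmer cast
dom_b = content * np.array([0.7, 0.9, 1.1]) + 0.05   # cooler cast

g_gain, g_bias = fit_style_map(dom_a, dom_b)   # G: A -> B
f_gain, f_bias = fit_style_map(dom_b, dom_a)   # F: B -> A

fake_b = apply_style(dom_a, g_gain, g_bias)    # A styled as B
cycled_a = apply_style(fake_b, f_gain, f_bias) # back to A

# Cycle-consistency: A -> B -> A should approximately recover A.
cycle_err = float(np.abs(cycled_a - dom_a).mean())
print(cycle_err)
```

In a real cycleGAN this cycle-consistency error is a training loss on the two generator networks; in the paper it is what lets images from an untrained endoscope model be rendered in the "AI-trained style" before detection.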
abstractGer |
A training dataset limited to a specific endoscope model can overfit artificial intelligence (AI) to that model's unique image characteristics, and AI performance may degrade on images from a different endoscope model. A domain adaptation algorithm, the cycle-consistent adversarial network (cycleGAN), can transform image characteristics into AI-friendly styles. We attempted to confirm the performance degradation of AIs on images from various endoscope models and aimed to improve it using cycleGAN transformation. Two AI models were developed from esophagogastroduodenoscopy data collected retrospectively over 5 years: one for identifying the endoscope models, Olympus CV-260SL and CV-290 (Olympus, Tokyo, Japan) and PENTAX EPK-i (PENTAX Medical, Tokyo, Japan), and the other for recognizing the esophagogastric junction (EGJ). The AIs were trained on 45,683 standardized images from 1,498 cases and validated on 624 separate cases. Between the two endoscope manufacturers, there was a difference in image characteristics that the AI could distinguish without error. The accuracy of the AI in recognizing the EGJ was >0.979 when the validation dataset came from the same endoscope model as the training dataset; however, accuracy deteriorated on datasets from different endoscope models. CycleGAN successfully converted image characteristics to improve AI performance. The improvements were statistically significant and greater for datasets from a different endoscope manufacturer [original → AI-trained style, increase in area under the receiver operating characteristic (ROC) curve, P-value: CV-260SL → CV-290, 0.0056, P = 0.0106; CV-260SL → EPK-i, 0.0182, P = 0.0158; CV-290 → CV-260SL, 0.0134, P < 0.0001; CV-290 → EPK-i, 0.0299, P = 0.0001; EPK-i → CV-260SL, 0.0215, P = 0.0024; and EPK-i → CV-290, 0.0616, P < 0.0001]. In conclusion, cycleGAN can transform the diverse image characteristics of endoscope models into an AI-trained style to improve the detection performance of AI.
abstract_unstemmed |
A training dataset limited to a specific endoscope model can overfit artificial intelligence (AI) to that model's unique image characteristics, and AI performance may degrade on images from a different endoscope model. A domain adaptation algorithm, the cycle-consistent adversarial network (cycleGAN), can transform image characteristics into AI-friendly styles. We attempted to confirm the performance degradation of AIs on images from various endoscope models and aimed to improve it using cycleGAN transformation. Two AI models were developed from esophagogastroduodenoscopy data collected retrospectively over 5 years: one for identifying the endoscope models, Olympus CV-260SL and CV-290 (Olympus, Tokyo, Japan) and PENTAX EPK-i (PENTAX Medical, Tokyo, Japan), and the other for recognizing the esophagogastric junction (EGJ). The AIs were trained on 45,683 standardized images from 1,498 cases and validated on 624 separate cases. Between the two endoscope manufacturers, there was a difference in image characteristics that the AI could distinguish without error. The accuracy of the AI in recognizing the EGJ was >0.979 when the validation dataset came from the same endoscope model as the training dataset; however, accuracy deteriorated on datasets from different endoscope models. CycleGAN successfully converted image characteristics to improve AI performance. The improvements were statistically significant and greater for datasets from a different endoscope manufacturer [original → AI-trained style, increase in area under the receiver operating characteristic (ROC) curve, P-value: CV-260SL → CV-290, 0.0056, P = 0.0106; CV-260SL → EPK-i, 0.0182, P = 0.0158; CV-290 → CV-260SL, 0.0134, P < 0.0001; CV-290 → EPK-i, 0.0299, P = 0.0001; EPK-i → CV-260SL, 0.0215, P = 0.0024; and EPK-i → CV-290, 0.0616, P < 0.0001]. In conclusion, cycleGAN can transform the diverse image characteristics of endoscope models into an AI-trained style to improve the detection performance of AI.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2003 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm |
url |
https://doi.org/10.3389/fmed.2022.1036974 https://doaj.org/article/e249349aab3e45b4aac5813ced1e1785 https://www.frontiersin.org/articles/10.3389/fmed.2022.1036974/full https://doaj.org/toc/2296-858X |
remote_bool |
true |
author2 |
Youngbae Hwang Hyun Gun Kim Joon Seong Lee Jin-Oh Kim Tae Hee Lee Seong Ran Jeon Su Jin Hong Bong Min Ko Seokmin Kim |
author2Str |
Youngbae Hwang Hyun Gun Kim Joon Seong Lee Jin-Oh Kim Tae Hee Lee Seong Ran Jeon Su Jin Hong Bong Min Ko Seokmin Kim |
ppnlink |
789482991 |
callnumber-subject |
R - General Medicine |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.3389/fmed.2022.1036974 |
callnumber-a |
R5-920 |
up_date |
2024-07-03T19:51:39.825Z |
_version_ |
1803588783128444928 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ083865489</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503074605.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230311s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3389/fmed.2022.1036974</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ083865489</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJe249349aab3e45b4aac5813ced1e1785</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">R5-920</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Junseok Park</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">A training dataset that is limited to a specific endoscope model can overfit artificial intelligence (AI) to its unique image characteristics. The performance of the AI may degrade in images of different endoscope model. The domain adaptation algorithm, i.e., the cycle-consistent adversarial network (cycleGAN), can transform the image characteristics into AI-friendly styles. We attempted to confirm the performance degradation of AIs in images of various endoscope models and aimed to improve them using cycleGAN transformation. Two AI models were developed from data of esophagogastroduodenoscopies collected retrospectively over 5 years: one for identifying the endoscope models, Olympus CV-260SL, CV-290 (Olympus, Tokyo, Japan), and PENTAX EPK-i (PENTAX Medical, Tokyo, Japan), and the other for recognizing the esophagogastric junction (EGJ). The AIs were trained using 45,683 standardized images from 1,498 cases and validated on 624 separate cases. Between the two endoscope manufacturers, there was a difference in image characteristics that could be distinguished without error by AI. The accuracy of the AI in recognizing gastroesophageal junction was &gt;0.979 in the same endoscope-examined validation dataset as the training dataset. However, they deteriorated in datasets from different endoscopes. Cycle-consistent adversarial network can successfully convert image characteristics to ameliorate the AI performance. 
The improvements were statistically significant and greater in datasets from different endoscope manufacturers [original → AI-trained style, increased area under the receiver operating characteristic (ROC) curve, P-value: CV-260SL → CV-290, 0.0056, P = 0.0106; CV-260SL → EPK-i, 0.0182, P = 0.0158; CV-290 → CV-260SL, 0.0134, P &lt; 0.0001; CV-290 → EPK-i, 0.0299, P = 0.0001; EPK-i → CV-260SL, 0.0215, P = 0.0024; and EPK-i → CV-290, 0.0616, P &lt; 0.0001]. In conclusion, cycleGAN can transform the diverse image characteristics of endoscope models into an AI-trained style to improve the detection performance of AI.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">endoscopes</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">artificial intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">generative adversarial network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">domain adaptation algorithm</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Medicine (General)</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Youngbae Hwang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Hyun Gun Kim</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Joon Seong Lee</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Jin-Oh Kim</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Tae Hee 
Lee</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Seong Ran Jeon</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Su Jin Hong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Bong Min Ko</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Seokmin Kim</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Frontiers in Medicine</subfield><subfield code="d">Frontiers Media S.A., 2014</subfield><subfield code="g">9(2022)</subfield><subfield code="w">(DE-627)789482991</subfield><subfield code="w">(DE-600)2775999-4</subfield><subfield code="x">2296858X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:9</subfield><subfield code="g">year:2022</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.3389/fmed.2022.1036974</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/e249349aab3e45b4aac5813ced1e1785</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.frontiersin.org/articles/10.3389/fmed.2022.1036974/full</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2296-858X</subfield><subfield code="y">Journal toc</subfield><subfield 
code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">9</subfield><subfield code="j">2022</subfield></datafield></record></collection>
|