Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI
Abstract Background The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared...
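The "saliency" in the title refers to maps that score how strongly each voxel of the input volume influences the predicted class. As an illustration only (not the authors' implementation, which backpropagates the class score through the 3D CNN), the idea can be sketched with a numerical gradient against any black-box scoring function; the `saliency_map` helper and the toy centre-weighted model below are hypothetical:

```python
import numpy as np

def saliency_map(model, x, eps=1e-4):
    """Finite-difference saliency: |d model(x) / d x_i| for every voxel.

    Gradient-based saliency methods backpropagate the class score
    through the network; here the gradient is approximated numerically,
    so the sketch works for any black-box scoring function.
    """
    x = np.asarray(x, dtype=float)
    base = model(x)
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        xp = x.copy()
        xp[idx] += eps          # perturb one voxel
        grad[idx] = (model(xp) - base) / eps
    return np.abs(grad)

# Toy "model": the score is dominated by the centre voxel of a 3x3x3 volume,
# so the saliency map should light up at (1, 1, 1).
model = lambda v: 3.0 * v[1, 1, 1] + 0.1 * v.sum()
vol = np.zeros((3, 3, 3))
sal = saliency_map(model, vol)
```

On a real 3D CNN one would replace the lambda with the network's softmax output for the predicted class and compute the gradient analytically, since finite differences over every voxel of an MRI volume would be far too slow.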
Detailed description

Author(s): Shu-Hui Wang [author]; Xin-Jun Han [author]; Jing Du [author]; Zhen-Chang Wang [author]; Chunwang Yuan [author]; Yinan Chen [author]; Yajing Zhu [author]; Xin Dou [author]; Xiao-Wei Xu [author]; Hui Xu [author]; Zheng-Han Yang [author]
Format: E-article
Language: English
Published: 2021
Keywords:
Contained in: Insights into Imaging - SpringerOpen, 2013, 12(2021), 1, p. 12
Contained in: volume:12 ; year:2021 ; number:1 ; pages:12
Links:
DOI / URN: 10.1186/s13244-021-01117-z
Catalogue ID: DOAJ075379929
LEADER 01000caa a22002652 4500
001 DOAJ075379929
003 DE-627
005 20230503065618.0
007 cr uuu---uuuuu
008 230228s2021 xx |||||o 00| ||eng c
024 7  |a 10.1186/s13244-021-01117-z |2 doi
035    |a (DE-627)DOAJ075379929
035    |a (DE-599)DOAJd040028a424a4073bc8776acf614b66e
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a R895-920
100 0  |a Shu-Hui Wang |e verfasserin |4 aut
245 10 |a Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI
264  1 |c 2021
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Abstract Background The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared the differential diagnosis between the proposed model and radiologists. Methods In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrix of the model and individual radiologists were compared. Saliency maps were generated to highlight the activation region based on the model perspective. Results The AUC of the two- and seven-way classifications of the model were 0.969 (95% CI 0.944–0.994) and from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000), respectively. The model accuracy (79.6%) of the seven-way classification was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic errors for the model and individual radiologists for each disease. Saliency maps detected the activation regions associated with each predicted class. Conclusion This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI. The analysis principle contributing to the predictions can be explained via saliency maps.
650  4 |a Deep learning
650  4 |a MRI
650  4 |a Classification
650  4 |a Focal liver lesion
650  4 |a Model interpretation
653  0 |a Medical physics. Medical radiology. Nuclear medicine
700 0  |a Xin-Jun Han |e verfasserin |4 aut
700 0  |a Jing Du |e verfasserin |4 aut
700 0  |a Zhen-Chang Wang |e verfasserin |4 aut
700 0  |a Chunwang Yuan |e verfasserin |4 aut
700 0  |a Yinan Chen |e verfasserin |4 aut
700 0  |a Yajing Zhu |e verfasserin |4 aut
700 0  |a Xin Dou |e verfasserin |4 aut
700 0  |a Xiao-Wei Xu |e verfasserin |4 aut
700 0  |a Hui Xu |e verfasserin |4 aut
700 0  |a Zheng-Han Yang |e verfasserin |4 aut
773 08 |i In |t Insights into Imaging |d SpringerOpen, 2013 |g 12(2021), 1, Seite 12 |w (DE-627)621547425 |w (DE-600)2543323-4 |x 18694101 |7 nnns
773 18 |g volume:12 |g year:2021 |g number:1 |g pages:12
856 40 |u https://doi.org/10.1186/s13244-021-01117-z |z kostenfrei
856 40 |u https://doaj.org/article/d040028a424a4073bc8776acf614b66e |z kostenfrei
856 40 |u https://doi.org/10.1186/s13244-021-01117-z |z kostenfrei
856 42 |u https://doaj.org/toc/1869-4101 |y Journal toc |z kostenfrei
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_DOAJ
912    |a SSG-OLC-PHA
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_95
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_151
912    |a GBV_ILN_161
912    |a GBV_ILN_170
912    |a GBV_ILN_206
912    |a GBV_ILN_213
912    |a GBV_ILN_230
912    |a GBV_ILN_285
912    |a GBV_ILN_293
912    |a GBV_ILN_602
912    |a GBV_ILN_2005
912    |a GBV_ILN_2009
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2055
912    |a GBV_ILN_2111
912    |a GBV_ILN_4012
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4249
912    |a GBV_ILN_4277
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4338
912    |a GBV_ILN_4367
912    |a GBV_ILN_4700
951    |a AR
952    |d 12 |j 2021 |e 1 |h 12
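The AUC values quoted in the abstract summarise receiver operating characteristic analysis. For readers unfamiliar with the metric: AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case, so it can be computed directly from the ranks without plotting a curve. A minimal NumPy sketch of this pair-counting formulation (illustrative only; not the study's evaluation code, which also produced confidence intervals):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-statistic (Mann-Whitney)
    formulation: the fraction of positive-negative pairs in which the
    positive case is scored higher, with ties counting half.
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels]
    neg = scores[~labels]
    if len(pos) == 0 or len(neg) == 0:
        raise ValueError("need at least one positive and one negative case")
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: a score that separates the two classes fairly well
y = [0, 0, 1, 1, 1, 0]
s = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
print(round(roc_auc(y, s), 3))  # → 0.889 (8 of 9 pairs correctly ordered)
```

For the seven-way classification reported here, this binary statistic would be applied one-vs-rest per class, which is why the abstract reports a range of per-class AUCs rather than a single number.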
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4277</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">12</subfield><subfield code="j">2021</subfield><subfield code="e">1</subfield><subfield code="h">12</subfield></datafield></record></collection>
|
callnumber-first |
R - Medicine |
author |
Shu-Hui Wang |
spellingShingle |
Shu-Hui Wang misc R895-920 misc Deep learning misc MRI misc Classification misc Focal liver lesion misc Model interpretation misc Medical physics. Medical radiology. Nuclear medicine Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI |
authorStr |
Shu-Hui Wang |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)621547425 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
R895-920 |
illustrated |
Not Illustrated |
issn |
18694101 |
topic_title |
R895-920 Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI Deep learning MRI Classification Focal liver lesion Model interpretation |
topic |
misc R895-920 misc Deep learning misc MRI misc Classification misc Focal liver lesion misc Model interpretation misc Medical physics. Medical radiology. Nuclear medicine |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Insights into Imaging |
hierarchy_parent_id |
621547425 |
hierarchy_top_title |
Insights into Imaging |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)621547425 (DE-600)2543323-4 |
title |
Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI |
ctrlnum |
(DE-627)DOAJ075379929 (DE-599)DOAJd040028a424a4073bc8776acf614b66e |
title_full |
Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI |
author_sort |
Shu-Hui Wang |
journal |
Insights into Imaging |
journalStr |
Insights into Imaging |
callnumber-first-code |
R |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
txt |
container_start_page |
12 |
author_browse |
Shu-Hui Wang Xin-Jun Han Jing Du Zhen-Chang Wang Chunwang Yuan Yinan Chen Yajing Zhu Xin Dou Xiao-Wei Xu Hui Xu Zheng-Han Yang |
container_volume |
12 |
class |
R895-920 |
format_se |
Elektronische Aufsätze |
author-letter |
Shu-Hui Wang |
doi_str_mv |
10.1186/s13244-021-01117-z |
author2-role |
verfasserin |
title_sort |
saliency-based 3d convolutional neural network for categorising common focal liver lesions on multisequence mri |
callnumber |
R895-920 |
title_auth |
Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI |
abstract |
Abstract Background The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared the differential diagnosis between the proposed model and radiologists. Methods In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrices of the model and individual radiologists were compared. Saliency maps were generated to highlight the activation regions from the model's perspective. Results The AUC of the two-way classification of the model was 0.969 (95% CI 0.944–0.994), and the AUCs of the seven-way classification ranged from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000). The model accuracy (79.6%) in the seven-way classification was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic errors of the model and individual radiologists for each disease. Saliency maps identified the activation regions associated with each predicted class. Conclusion This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI. The reasoning behind the model's predictions can be explained via saliency maps.
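The abstract above reports one-vs-rest AUCs for the seven-way classification. As an illustrative aside (not the authors' code), the one-vs-rest AUC for a single class can be computed from raw scores via the Mann–Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The labels, scores, and class indices below are hypothetical toy data.

```python
import numpy as np

def auc_ovr(labels, scores, positive):
    """One-vs-rest AUC via the Mann-Whitney statistic: the probability
    that a random positive case outscores a random negative one."""
    pos = scores[labels == positive]
    neg = scores[labels != positive]
    # Count pairwise wins; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: labels from 3 of 7 hypothetical FLL categories,
# with the model's scores for class 2.
labels = np.array([2, 2, 2, 0, 1, 0, 1])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.7, 0.1, 0.2])
print(round(auc_ovr(labels, scores, positive=2), 3))  # → 0.917
```

Repeating this per class (each class as "positive" against the rest) yields the per-class AUC range the abstract reports for the seven-way task.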
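The abstract also mentions saliency maps that highlight the input regions driving each predicted class. A common construction (not necessarily the paper's exact method) takes the absolute gradient of the predicted class score with respect to the input. A minimal numpy sketch, using a hypothetical linear stand-in for a trained network (for a linear model, the gradient of logit c is simply the c-th weight row):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: one linear layer mapping
# a flattened 4x4 "image" to 7 class logits (weights are illustrative).
W = rng.normal(size=(7, 16))
x = rng.normal(size=16)

logits = W @ x
c = int(np.argmax(logits))  # predicted class

# Gradient-based saliency: |d(logit_c)/dx|, reshaped to the input grid.
# For this linear model the gradient is exactly W[c].
saliency = np.abs(W[c]).reshape(4, 4)  # higher value = more influential pixel
print(saliency.shape)  # → (4, 4)
```

For the 3D CNN in the article, the same idea applies with the gradient computed by backpropagation through the network and the map overlaid on the MRI volume.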
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2005 GBV_ILN_2009 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2055 GBV_ILN_2111 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4277 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
container_issue |
1 |
title_short |
Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI |
url |
https://doi.org/10.1186/s13244-021-01117-z https://doaj.org/article/d040028a424a4073bc8776acf614b66e https://doaj.org/toc/1869-4101 |
remote_bool |
true |
author2 |
Xin-Jun Han Jing Du Zhen-Chang Wang Chunwang Yuan Yinan Chen Yajing Zhu Xin Dou Xiao-Wei Xu Hui Xu Zheng-Han Yang |
author2Str |
Xin-Jun Han Jing Du Zhen-Chang Wang Chunwang Yuan Yinan Chen Yajing Zhu Xin Dou Xiao-Wei Xu Hui Xu Zheng-Han Yang |
ppnlink |
621547425 |
callnumber-subject |
R - General Medicine |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1186/s13244-021-01117-z |
callnumber-a |
R895-920 |
up_date |
2024-07-03T14:34:42.919Z |
_version_ |
1803568842458267648 |
|