Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets
Deep learning has attracted growing interest for application to medical imaging, such as positron emission tomography (PET), due to its excellent performance. Convolutional neural networks (CNNs), a facet of deep learning, require large training-image datasets. This presents a challenge in a clinical setting because it is difficult to prepare large, high-quality patient-related datasets. Recently, the deep image prior (DIP) approach has been devised, based on the fact that CNN structures have the intrinsic ability to solve inverse problems such as denoising without pre-training, and therefore do not require the preparation of training datasets. Herein, we propose dynamic PET image denoising using a DIP approach, with the PET data itself being used to reduce the statistical image noise. Static PET data were acquired as input to the network, the dynamic PET images were handled as training labels, and the denoised dynamic PET images were represented by the network output. We applied the proposed DIP method to computer simulations and also to real data acquired from a living monkey brain with ¹⁸F-fluoro-2-deoxy-D-glucose (¹⁸F-FDG). In the simulations, our DIP method produced less noisy and more accurate dynamic images than the other algorithms. Moreover, on real data, the DIP method performed better than other post-denoising methods in terms of contrast-to-noise ratio, and maintained the contrast-to-noise ratio when the list data were resampled to 1/5 and 1/10 of the original size, demonstrating that the DIP method could be applied to low-dose PET imaging. These results indicate that the proposed DIP method provides a promising means of post-denoising for dynamic PET images.
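The training scheme described in the abstract (a static PET image as the fixed network input, a noisy dynamic frame as the training label, and the network output as the denoised frame) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the tiny convolutional network, the PyTorch framework, and the optimizer settings and iteration count are all assumptions standing in for the U-Net-style architecture usually paired with DIP.

```python
# Minimal deep-image-prior (DIP) denoising sketch, as described in the
# abstract: fit a CNN so that f(static PET) approximates a noisy dynamic
# frame; stopping after a limited number of iterations provides the
# implicit regularization that suppresses noise.
# NOTE: TinyDIPNet and all hyperparameters below are illustrative
# assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn


class TinyDIPNet(nn.Module):
    """Deliberately small conv net standing in for a U-Net-like model."""

    def __init__(self, ch: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


def dip_denoise(static_img, noisy_frame, n_iter=200, lr=1e-2):
    """Fit net(static_img) -> noisy_frame; return (denoised, loss history).

    Early stopping (a modest n_iter) is what keeps the output from
    reproducing the noise in the label frame.
    """
    net = TinyDIPNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    losses = []
    for _ in range(n_iter):
        opt.zero_grad()
        loss = loss_fn(net(static_img), noisy_frame)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    with torch.no_grad():
        return net(static_img), losses


if __name__ == "__main__":
    torch.manual_seed(0)
    clean = torch.zeros(1, 1, 32, 32)
    clean[..., 8:24, 8:24] = 1.0                       # toy activity region
    static = clean + 0.05 * torch.randn_like(clean)    # high-count input
    noisy = clean + 0.5 * torch.randn_like(clean)      # noisy dynamic frame
    denoised, losses = dip_denoise(static, noisy, n_iter=100)
    print(denoised.shape, losses[0], losses[-1])
```

In the paper's setting the same idea is applied to each dynamic frame, with the high-count static reconstruction serving as the shared network input, so no external training dataset is ever needed.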
Detailed Description
Author(s): Fumio Hashimoto; Hiroyuki Ohba; Kibo Ote; Atsushi Teramoto; Hideo Tsukada
Format: E-article
Language: English
Published: 2019
Subject headings: Convolutional neural networks; deep image prior; deep learning; denoising; dynamic positron emission tomography
Contained in: IEEE Access, IEEE, 2014; 7(2019), pp. 96594-96603
Contained in: volume:7; year:2019; pages:96594-96603
DOI / URN: 10.1109/ACCESS.2019.2929230
Catalog ID: DOAJ05768474X
LEADER 01000caa a22002652 4500
001 DOAJ05768474X
003 DE-627
005 20230501192242.0
007 cr uuu---uuuuu
008 230227s2019 xx |||||o 00| ||eng c
024 7  |a 10.1109/ACCESS.2019.2929230 |2 doi
035    |a (DE-627)DOAJ05768474X
035    |a (DE-599)DOAJd4701094f5b94878b8831baaf553f1e4
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a TK1-9971
100 0  |a Fumio Hashimoto |e verfasserin |4 aut
245 10 |a Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets
264  1 |c 2019
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Deep learning has attracted growing interest for application to medical imaging, such as positron emission tomography (PET), due to its excellent performance. Convolutional neural networks (CNNs), a facet of deep learning requires large training-image datasets. This presents a challenge in a clinical setting because it is difficult to prepare large, high-quality patient-related datasets. Recently, the deep image prior (DIP) approach has been devised, based on the fact that CNN structures have the intrinsic ability to solve inverse problems such as denoising without pre-training and do not require the preparation of training datasets. Herein, we proposed the dynamic PET image denoising using a DIP approach, with the PET data itself being used to reduce the statistical image noise. Static PET data were acquired for input to the network, with the dynamic PET images being handled as training labels, while the denoised dynamic PET images were represented by the network output. We applied the proposed DIP method to computer simulations and also to real data acquired from a living monkey brain with <sup>18</sup>F-fluoro-2-deoxy-D-glucose (<sup>18</sup>F-FDG). As a simulation result, our DIP method produced less noisy and more accurate dynamic images than the other algorithms. Moreover, using real data, the DIP method was found to perform better than other types of post-denoising method in terms of contrast-to-noise ratio, and also maintain the contrast-to-noise ratio when resampling the list data to 1/5 and 1/10 of the original size, demonstrating that the DIP method could be applied to low-dose PET imaging. These results indicated that the proposed DIP method provides a promising means of post-denoising for dynamic PET images.
650  4 |a Convolutional neural networks
650  4 |a deep image prior
650  4 |a deep learning
650  4 |a denoising
650  4 |a dynamic positron emission tomography
653  0 |a Electrical engineering. Electronics. Nuclear engineering
700 0  |a Hiroyuki Ohba |e verfasserin |4 aut
700 0  |a Kibo Ote |e verfasserin |4 aut
700 0  |a Atsushi Teramoto |e verfasserin |4 aut
700 0  |a Hideo Tsukada |e verfasserin |4 aut
773 08 |i In |t IEEE Access |d IEEE, 2014 |g 7(2019), Seite 96594-96603 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 18 |g volume:7 |g year:2019 |g pages:96594-96603
856 40 |u https://doi.org/10.1109/ACCESS.2019.2929230 |z kostenfrei
856 40 |u https://doaj.org/article/d4701094f5b94878b8831baaf553f1e4 |z kostenfrei
856 40 |u https://ieeexplore.ieee.org/document/8764327/ |z kostenfrei
856 42 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_DOAJ
912    |a SSG-OLC-PHA
912    |a GBV_ILN_11
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_95
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_151
912    |a GBV_ILN_161
912    |a GBV_ILN_170
912    |a GBV_ILN_213
912    |a GBV_ILN_230
912    |a GBV_ILN_285
912    |a GBV_ILN_293
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_2014
912    |a GBV_ILN_4012
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4249
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4335
912    |a GBV_ILN_4338
912    |a GBV_ILN_4367
912    |a GBV_ILN_4700
951    |a AR
952    |d 7 |j 2019 |h 96594-96603
authorStr |
Fumio Hashimoto |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)728440385 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK1-9971 |
illustrated |
Not Illustrated |
issn |
21693536 |
topic_title |
TK1-9971 Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets Convolutional neural networks deep image prior deep learning denoising dynamic positron emission tomography |
topic |
misc TK1-9971 misc Convolutional neural networks misc deep image prior misc deep learning misc denoising misc dynamic positron emission tomography misc Electrical engineering. Electronics. Nuclear engineering |
topic_unstemmed |
misc TK1-9971 misc Convolutional neural networks misc deep image prior misc deep learning misc denoising misc dynamic positron emission tomography misc Electrical engineering. Electronics. Nuclear engineering |
topic_browse |
misc TK1-9971 misc Convolutional neural networks misc deep image prior misc deep learning misc denoising misc dynamic positron emission tomography misc Electrical engineering. Electronics. Nuclear engineering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Access |
hierarchy_parent_id |
728440385 |
hierarchy_top_title |
IEEE Access |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)728440385 (DE-600)2687964-5 |
title |
Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets |
ctrlnum |
(DE-627)DOAJ05768474X (DE-599)DOAJd4701094f5b94878b8831baaf553f1e4 |
title_full |
Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets |
author_sort |
Fumio Hashimoto |
journal |
IEEE Access |
journalStr |
IEEE Access |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2019 |
contenttype_str_mv |
txt |
container_start_page |
96594 |
author_browse |
Fumio Hashimoto Hiroyuki Ohba Kibo Ote Atsushi Teramoto Hideo Tsukada |
container_volume |
7 |
class |
TK1-9971 |
format_se |
Elektronische Aufsätze |
author-letter |
Fumio Hashimoto |
doi_str_mv |
10.1109/ACCESS.2019.2929230 |
author2-role |
verfasserin |
title_sort |
dynamic pet image denoising using deep convolutional neural networks without prior training datasets |
callnumber |
TK1-9971 |
title_auth |
Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets |
abstract |
Deep learning has attracted growing interest for application to medical imaging, such as positron emission tomography (PET), due to its excellent performance. Convolutional neural networks (CNNs), a facet of deep learning, require large training-image datasets. This presents a challenge in a clinical setting because it is difficult to prepare large, high-quality patient-related datasets. Recently, the deep image prior (DIP) approach has been devised, based on the fact that CNN structures have the intrinsic ability to solve inverse problems such as denoising without pre-training and do not require the preparation of training datasets. Herein, we proposed dynamic PET image denoising using a DIP approach, with the PET data itself being used to reduce the statistical image noise. Static PET data were acquired for input to the network, with the dynamic PET images being handled as training labels, while the denoised dynamic PET images were represented by the network output. We applied the proposed DIP method to computer simulations and also to real data acquired from a living monkey brain with ¹⁸F-fluoro-2-deoxy-D-glucose (¹⁸F-FDG). As a simulation result, our DIP method produced less noisy and more accurate dynamic images than the other algorithms. Moreover, using real data, the DIP method was found to perform better than other post-denoising methods in terms of contrast-to-noise ratio, and also to maintain the contrast-to-noise ratio when resampling the list data to 1/5 and 1/10 of the original size, demonstrating that the DIP method could be applied to low-dose PET imaging. These results indicated that the proposed DIP method provides a promising means of post-denoising for dynamic PET images.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets |
url |
https://doi.org/10.1109/ACCESS.2019.2929230 https://doaj.org/article/d4701094f5b94878b8831baaf553f1e4 https://ieeexplore.ieee.org/document/8764327/ https://doaj.org/toc/2169-3536 |
remote_bool |
true |
author2 |
Hiroyuki Ohba Kibo Ote Atsushi Teramoto Hideo Tsukada |
author2Str |
Hiroyuki Ohba Kibo Ote Atsushi Teramoto Hideo Tsukada |
ppnlink |
728440385 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/ACCESS.2019.2929230 |
callnumber-a |
TK1-9971 |
up_date |
2024-07-03T13:29:52.438Z |
_version_ |
1803564762998505472 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ05768474X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230501192242.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230227s2019 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/ACCESS.2019.2929230</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ05768474X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJd4701094f5b94878b8831baaf553f1e4</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK1-9971</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Fumio Hashimoto</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2019</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield 
code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Deep learning has attracted growing interest for application to medical imaging, such as positron emission tomography (PET), due to its excellent performance. Convolutional neural networks (CNNs), a facet of deep learning, require large training-image datasets. This presents a challenge in a clinical setting because it is difficult to prepare large, high-quality patient-related datasets. Recently, the deep image prior (DIP) approach has been devised, based on the fact that CNN structures have the intrinsic ability to solve inverse problems such as denoising without pre-training and do not require the preparation of training datasets. Herein, we proposed dynamic PET image denoising using a DIP approach, with the PET data itself being used to reduce the statistical image noise. Static PET data were acquired for input to the network, with the dynamic PET images being handled as training labels, while the denoised dynamic PET images were represented by the network output. We applied the proposed DIP method to computer simulations and also to real data acquired from a living monkey brain with ¹⁸F-fluoro-2-deoxy-D-glucose (¹⁸F-FDG). As a simulation result, our DIP method produced less noisy and more accurate dynamic images than the other algorithms. Moreover, using real data, the DIP method was found to perform better than other post-denoising methods in terms of contrast-to-noise ratio, and also to maintain the contrast-to-noise ratio when resampling the list data to 1/5 and 1/10 of the original size, demonstrating that the DIP method could be applied to low-dose PET imaging. 
These results indicated that the proposed DIP method provides a promising means of post-denoising for dynamic PET images.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Convolutional neural networks</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep image prior</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">denoising</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">dynamic positron emission tomography</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electrical engineering. Electronics. Nuclear engineering</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Hiroyuki Ohba</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Kibo Ote</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Atsushi Teramoto</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Hideo Tsukada</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Access</subfield><subfield code="d">IEEE, 2014</subfield><subfield code="g">7(2019), Seite 96594-96603</subfield><subfield code="w">(DE-627)728440385</subfield><subfield code="w">(DE-600)2687964-5</subfield><subfield code="x">21693536</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:7</subfield><subfield code="g">year:2019</subfield><subfield 
code="g">pages:96594-96603</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/ACCESS.2019.2929230</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/d4701094f5b94878b8831baaf553f1e4</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/8764327/</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2169-3536</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" 
" ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">7</subfield><subfield code="j">2019</subfield><subfield code="h">96594-96603</subfield></datafield></record></collection>
|