Dual-constraint burst image denoising method
Abstract: Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms.
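The dual-constraint idea from the abstract — one branch trained toward the clean signal, the other toward the noise, with a light fusion of the two estimates — can be sketched in a few lines of NumPy. Everything below is an illustrative stand-in (toy averaging "branches", synthetic data), not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

# Toy data: a "burst" of 4 noisy copies of an 8x8 clean image.
clean = rng.random((8, 8))
burst = clean[None] + 0.1 * rng.standard_normal((4, 8, 8))

# Stand-ins for the two CNN branches: here, simple averaging and its residual.
signal_est = burst.mean(axis=0)           # branch 1: estimate the clean signal
noise_est = burst - signal_est[None]      # branch 2: estimate the noise

# Signal constraint: branch 1's output should match the clean image.
loss_signal = mse(signal_est, clean)
# Noise constraint: branch 2's output should match the true noise.
loss_noise = mse(noise_est, burst - clean[None])

# Fusion: subtracting the estimated noise gives a second denoised estimate;
# the paper uses a light CNN block to blend the two, here we just average.
denoised = 0.5 * (signal_est + (burst - noise_est).mean(axis=0))
```

In the paper both branches are CNNs and the fusion is learned; the point of the sketch is only how the two constraints supervise complementary quantities (signal vs. noise) whose combination yields the final output.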
Detailed description

Author: Zhang, Dan [author]
Format: E-Article
Language: English
Published: 2022
Note: © Zhejiang University Press 2022
Contained in: Journal of Zhejiang University - Hangzhou : Zhejiang Univ. Press, 2010, 23(2022), no. 2, 24 Jan., pages 220-233
Volume data: volume:23 ; year:2022 ; number:2 ; day:24 ; month:01 ; pages:220-233
DOI: 10.1631/FITEE.2000353
Catalog ID: SPR050563041
LEADER 01000naa a22002652 4500
001 SPR050563041
003 DE-627
005 20230507133501.0
007 cr uuu---uuuuu
008 230507s2022 xx |||||o 00| ||eng c
024 7 |a 10.1631/FITEE.2000353 |2 doi
035 |a (DE-627)SPR050563041
035 |a (SPR)FITEE.2000353-e
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Zhang, Dan |e verfasserin |0 (orcid)0000-0002-5033-8128 |4 aut
245 1 0 |a Dual-constraint burst image denoising method
264 1 |c 2022
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
500 |a © Zhejiang University Press 2022
520 |a Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms.
700 1 |a Zhao, Lei |0 (orcid)0000-0003-4791-454X |4 aut
700 1 |a Xu, Duanqing |4 aut
700 1 |a Lu, Dongming |4 aut
773 0 8 |i Enthalten in |t Journal of Zhejiang University |d Hangzhou : Zhejiang Univ. Press, 2010 |g 23(2022), 2 vom: 24. Jan., Seite 220-233 |w (DE-627)618789693 |w (DE-600)2537865-X |x 1869-196X |7 nnns
773 1 8 |g volume:23 |g year:2022 |g number:2 |g day:24 |g month:01 |g pages:220-233
856 4 0 |u https://dx.doi.org/10.1631/FITEE.2000353 |z lizenzpflichtig |3 Volltext
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_SPRINGER
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_32
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_74
912 |a GBV_ILN_90
912 |a GBV_ILN_95
912 |a GBV_ILN_100
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_120
912 |a GBV_ILN_121
912 |a GBV_ILN_138
912 |a GBV_ILN_152
912 |a GBV_ILN_161
912 |a GBV_ILN_171
912 |a GBV_ILN_187
912 |a GBV_ILN_206
912 |a GBV_ILN_224
912 |a GBV_ILN_250
912 |a GBV_ILN_281
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_647
912 |a GBV_ILN_702
912 |a GBV_ILN_2001
912 |a GBV_ILN_2003
912 |a GBV_ILN_2005
912 |a GBV_ILN_2007
912 |a GBV_ILN_2009
912 |a GBV_ILN_2014
912 |a GBV_ILN_2015
912 |a GBV_ILN_2018
912 |a GBV_ILN_2021
912 |a GBV_ILN_2025
912 |a GBV_ILN_2026
912 |a GBV_ILN_2031
912 |a GBV_ILN_2034
912 |a GBV_ILN_2037
912 |a GBV_ILN_2039
912 |a GBV_ILN_2044
912 |a GBV_ILN_2059
912 |a GBV_ILN_2064
912 |a GBV_ILN_2068
912 |a GBV_ILN_2106
912 |a GBV_ILN_2108
912 |a GBV_ILN_2111
912 |a GBV_ILN_2116
912 |a GBV_ILN_2118
912 |a GBV_ILN_2119
912 |a GBV_ILN_2122
912 |a GBV_ILN_2129
912 |a GBV_ILN_2143
912 |a GBV_ILN_2144
912 |a GBV_ILN_2147
912 |a GBV_ILN_2148
912 |a GBV_ILN_2152
912 |a GBV_ILN_2153
912 |a GBV_ILN_2190
951 |a AR
952 |d 23 |j 2022 |e 2 |b 24 |c 01 |h 220-233
author_variant |
d z dz l z lz d x dx d l dl |
---|---|
matchkey_str |
article:1869196X:2022----::ulosritusiaee |
hierarchy_sort_str |
2022 |
publishDate |
2022 |
allfields |
10.1631/FITEE.2000353 doi (DE-627)SPR050563041 (SPR)FITEE.2000353-e DE-627 ger DE-627 rakwb eng Zhang, Dan verfasserin (orcid)0000-0002-5033-8128 aut Dual-constraint burst image denoising method 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © Zhejiang University Press 2022 Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. Zhao, Lei (orcid)0000-0003-4791-454X aut Xu, Duanqing aut Lu, Dongming aut Enthalten in Journal of Zhejiang University Hangzhou : Zhejiang Univ. Press, 2010 23(2022), 2 vom: 24. 
Jan., Seite 220-233 (DE-627)618789693 (DE-600)2537865-X 1869-196X nnns volume:23 year:2022 number:2 day:24 month:01 pages:220-233 https://dx.doi.org/10.1631/FITEE.2000353 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_121 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_206 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_647 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2009 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2018 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2059 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2116 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 AR 23 2022 2 24 01 220-233 |
spelling |
10.1631/FITEE.2000353 doi (DE-627)SPR050563041 (SPR)FITEE.2000353-e DE-627 ger DE-627 rakwb eng Zhang, Dan verfasserin (orcid)0000-0002-5033-8128 aut Dual-constraint burst image denoising method 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © Zhejiang University Press 2022 Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. Zhao, Lei (orcid)0000-0003-4791-454X aut Xu, Duanqing aut Lu, Dongming aut Enthalten in Journal of Zhejiang University Hangzhou : Zhejiang Univ. Press, 2010 23(2022), 2 vom: 24. 
Jan., Seite 220-233 (DE-627)618789693 (DE-600)2537865-X 1869-196X nnns volume:23 year:2022 number:2 day:24 month:01 pages:220-233 https://dx.doi.org/10.1631/FITEE.2000353 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_121 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_206 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_647 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2009 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2018 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2059 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2116 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 AR 23 2022 2 24 01 220-233 |
allfields_unstemmed |
10.1631/FITEE.2000353 doi (DE-627)SPR050563041 (SPR)FITEE.2000353-e DE-627 ger DE-627 rakwb eng Zhang, Dan verfasserin (orcid)0000-0002-5033-8128 aut Dual-constraint burst image denoising method 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © Zhejiang University Press 2022 Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. Zhao, Lei (orcid)0000-0003-4791-454X aut Xu, Duanqing aut Lu, Dongming aut Enthalten in Journal of Zhejiang University Hangzhou : Zhejiang Univ. Press, 2010 23(2022), 2 vom: 24. 
Jan., Seite 220-233 (DE-627)618789693 (DE-600)2537865-X 1869-196X nnns volume:23 year:2022 number:2 day:24 month:01 pages:220-233 https://dx.doi.org/10.1631/FITEE.2000353 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_121 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_206 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_647 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2009 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2018 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2059 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2116 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 AR 23 2022 2 24 01 220-233 |
allfieldsGer |
10.1631/FITEE.2000353 doi (DE-627)SPR050563041 (SPR)FITEE.2000353-e DE-627 ger DE-627 rakwb eng Zhang, Dan verfasserin (orcid)0000-0002-5033-8128 aut Dual-constraint burst image denoising method 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © Zhejiang University Press 2022 Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. Zhao, Lei (orcid)0000-0003-4791-454X aut Xu, Duanqing aut Lu, Dongming aut Enthalten in Journal of Zhejiang University Hangzhou : Zhejiang Univ. Press, 2010 23(2022), 2 vom: 24. 
Jan., Seite 220-233 (DE-627)618789693 (DE-600)2537865-X 1869-196X nnns volume:23 year:2022 number:2 day:24 month:01 pages:220-233 https://dx.doi.org/10.1631/FITEE.2000353 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_121 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_206 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_647 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2009 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2018 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2059 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2116 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 AR 23 2022 2 24 01 220-233 |
allfieldsSound |
10.1631/FITEE.2000353 doi (DE-627)SPR050563041 (SPR)FITEE.2000353-e DE-627 ger DE-627 rakwb eng Zhang, Dan verfasserin (orcid)0000-0002-5033-8128 aut Dual-constraint burst image denoising method 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © Zhejiang University Press 2022 Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. Zhao, Lei (orcid)0000-0003-4791-454X aut Xu, Duanqing aut Lu, Dongming aut Enthalten in Journal of Zhejiang University Hangzhou : Zhejiang Univ. Press, 2010 23(2022), 2 vom: 24. 
Jan., Seite 220-233 (DE-627)618789693 (DE-600)2537865-X 1869-196X nnns volume:23 year:2022 number:2 day:24 month:01 pages:220-233 https://dx.doi.org/10.1631/FITEE.2000353 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_121 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_206 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_647 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2009 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2018 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2059 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2116 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 AR 23 2022 2 24 01 220-233 |
language |
English |
source |
Enthalten in Journal of Zhejiang University 23(2022), 2 vom: 24. Jan., Seite 220-233 volume:23 year:2022 number:2 day:24 month:01 pages:220-233 |
sourceStr |
Enthalten in Journal of Zhejiang University 23(2022), 2 vom: 24. Jan., Seite 220-233 volume:23 year:2022 number:2 day:24 month:01 pages:220-233 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
isfreeaccess_bool |
false |
container_title |
Journal of Zhejiang University |
authorswithroles_txt_mv |
Zhang, Dan @@aut@@ Zhao, Lei @@aut@@ Xu, Duanqing @@aut@@ Lu, Dongming @@aut@@ |
publishDateDaySort_date |
2022-01-24T00:00:00Z |
hierarchy_top_id |
618789693 |
id |
SPR050563041 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">SPR050563041</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230507133501.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230507s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1631/FITEE.2000353</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR050563041</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)FITEE.2000353-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Zhang, Dan</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-5033-8128</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Dual-constraint burst image denoising method</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield 
code="a">© Zhejiang University Press 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhao, Lei</subfield><subfield code="0">(orcid)0000-0003-4791-454X</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xu, Duanqing</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lu, Dongming</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Journal of Zhejiang University</subfield><subfield code="d">Hangzhou : Zhejiang Univ. 
Press, 2010</subfield><subfield code="g">23(2022), 2 vom: 24. Jan., Seite 220-233</subfield><subfield code="w">(DE-627)618789693</subfield><subfield code="w">(DE-600)2537865-X</subfield><subfield code="x">1869-196X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:23</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:2</subfield><subfield code="g">day:24</subfield><subfield code="g">month:01</subfield><subfield code="g">pages:220-233</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1631/FITEE.2000353</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_121</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_647</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2018</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2116</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2119</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">23</subfield><subfield code="j">2022</subfield><subfield code="e">2</subfield><subfield code="b">24</subfield><subfield code="c">01</subfield><subfield code="h">220-233</subfield></datafield></record></collection>
|
author |
Zhang, Dan |
spellingShingle |
Zhang, Dan Dual-constraint burst image denoising method |
authorStr |
Zhang, Dan |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)618789693 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1869-196X |
topic_title |
Dual-constraint burst image denoising method |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Journal of Zhejiang University |
hierarchy_parent_id |
618789693 |
hierarchy_top_title |
Journal of Zhejiang University |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)618789693 (DE-600)2537865-X |
title |
Dual-constraint burst image denoising method |
ctrlnum |
(DE-627)SPR050563041 (SPR)FITEE.2000353-e |
title_full |
Dual-constraint burst image denoising method |
author_sort |
Zhang, Dan |
journal |
Journal of Zhejiang University |
journalStr |
Journal of Zhejiang University |
lang_code |
eng |
isOA_bool |
false |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
220 |
author_browse |
Zhang, Dan Zhao, Lei Xu, Duanqing Lu, Dongming |
container_volume |
23 |
format_se |
Elektronische Aufsätze |
author-letter |
Zhang, Dan |
doi_str_mv |
10.1631/FITEE.2000353 |
normlink |
(ORCID)0000-0002-5033-8128 (ORCID)0000-0003-4791-454X |
normlink_prefix_str_mv |
(orcid)0000-0002-5033-8128 (orcid)0000-0003-4791-454X |
title_sort |
dual-constraint burst image denoising method |
title_auth |
Dual-constraint burst image denoising method |
abstract |
Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block-matching and 3D filtering (BM3D) with a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and the noisy images are fed separately into two parallel CNN branches, which produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. We further improve performance by optimizing the two branches under two different constraints, a signal constraint and a noise constraint: one branch maps to the clean signal, and the other maps to the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. © Zhejiang University Press 2022 
abstractGer |
Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block-matching and 3D filtering (BM3D) with a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and the noisy images are fed separately into two parallel CNN branches, which produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. We further improve performance by optimizing the two branches under two different constraints, a signal constraint and a noise constraint: one branch maps to the clean signal, and the other maps to the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. © Zhejiang University Press 2022 
abstract_unstemmed |
Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block-matching and 3D filtering (BM3D) with a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and the noisy images are fed separately into two parallel CNN branches, which produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. We further improve performance by optimizing the two branches under two different constraints, a signal constraint and a noise constraint: one branch maps to the clean signal, and the other maps to the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms. © Zhejiang University Press 2022 
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_121 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_206 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_647 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2009 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2018 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2059 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2116 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 |
container_issue |
2 |
title_short |
Dual-constraint burst image denoising method |
url |
https://dx.doi.org/10.1631/FITEE.2000353 |
remote_bool |
true |
author2 |
Zhao, Lei Xu, Duanqing Lu, Dongming |
author2Str |
Zhao, Lei Xu, Duanqing Lu, Dongming |
ppnlink |
618789693 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1631/FITEE.2000353 |
up_date |
2024-07-03T16:20:24.848Z |
_version_ |
1803575492468539392 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">SPR050563041</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230507133501.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230507s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1631/FITEE.2000353</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR050563041</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)FITEE.2000353-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Zhang, Dan</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-5033-8128</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Dual-constraint burst image denoising method</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield 
code="a">© Zhejiang University Press 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Deep learning has proven to be an effective mechanism for computer vision tasks, especially for image denoising and burst image denoising. In this paper, we focus on solving the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and noisy images are fed separately into two parallel CNN branches. The two branches produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. In addition, we improve the performance by optimizing the two branches using two different constraints: a signal constraint and a noise constraint. One maps a clean signal, and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhao, Lei</subfield><subfield code="0">(orcid)0000-0003-4791-454X</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xu, Duanqing</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lu, Dongming</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Journal of Zhejiang University</subfield><subfield code="d">Hangzhou : Zhejiang Univ. 
Press, 2010</subfield><subfield code="g">23(2022), 2 vom: 24. Jan., Seite 220-233</subfield><subfield code="w">(DE-627)618789693</subfield><subfield code="w">(DE-600)2537865-X</subfield><subfield code="x">1869-196X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:23</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:2</subfield><subfield code="g">day:24</subfield><subfield code="g">month:01</subfield><subfield code="g">pages:220-233</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1631/FITEE.2000353</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_121</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_647</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2018</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2116</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2119</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">23</subfield><subfield code="j">2022</subfield><subfield code="e">2</subfield><subfield code="b">24</subfield><subfield code="c">01</subfield><subfield code="h">220-233</subfield></datafield></record></collection>
|