Simultaneous Deep Stereo Matching and Dehazing with Feature Attention
Abstract: Unveiling the dense correspondence under a haze layer remains a challenging task, since scattering effects result in less distinctive image features. Conversely, dehazing is often confounded by the airlight-albedo ambiguity, which cannot be resolved independently at each pixel. In this paper, we introduce a deep convolutional neural network that simultaneously estimates a disparity map and a clear image from a hazy stereo image pair. Both tasks are synergistically formulated by fusing depth information from the matching cost and the haze transmission. To learn the optimal fusion of depth-related features, we present a novel encoder-decoder architecture that extends the core idea of the attention mechanism to simultaneous stereo matching and dehazing. As a result, our method estimates high-quality disparity for stereo images in scattering media and produces appearance images with enhanced visibility. Finally, we propose an effective strategy for adaptation to camera-captured images by distilling cross-domain knowledge. Experiments on both synthetic and real-world scenarios, including comparisons with state-of-the-art methods, demonstrate the effectiveness and flexibility of our approach.
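The "airlight-albedo ambiguity" and "haze transmission" mentioned in the abstract refer to the standard atmospheric scattering model, in which a hazy image is a depth-dependent blend of scene radiance and global airlight. The sketch below illustrates that model and its stereo connection (depth recovered from disparity); it is a minimal illustration of the underlying physics, not the paper's network, and all parameter values and function names are assumptions chosen for the example.

```python
import numpy as np

# Standard atmospheric scattering model:
#   I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * z(x))
# J: clear scene radiance, A: global airlight, t: transmission,
# beta: scattering coefficient, z: scene depth.
# The stereo link is z = focal * baseline / disparity, which is why
# matching cost and transmission carry mutually useful depth cues.

def synthesize_haze(J, disparity, A=0.8, beta=1.2, focal_baseline=10.0):
    """Render a hazy image from a clear image and its disparity map."""
    z = focal_baseline / np.clip(disparity, 1e-3, None)  # depth from disparity
    t = np.exp(-beta * z)                                # transmission map
    hazy = J * t[..., None] + A * (1.0 - t[..., None])   # blend with airlight
    return hazy, t

def dehaze(I, t, A=0.8, t_min=0.1):
    """Invert the model when transmission is known; estimating t is the
    hard, ambiguous part that the paper couples with stereo matching."""
    return (I - A) / np.clip(t[..., None], t_min, None) + A
```

With a known transmission map the inversion is exact; the per-pixel ambiguity arises only because a single hazy observation cannot separate `t` from the albedo without extra cues such as depth.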
Detailed Description

Author: Song, Taeyong (author)
Format: Article
Language: English
Published: 2020
Keywords: Stereo matching; Dehazing; CNN; Multi-task learning; Knowledge distillation; Stereo confidence; Adverse weather condition
Note: © Springer Science+Business Media, LLC, part of Springer Nature 2020
Published in: International journal of computer vision - Springer US, 1987, 128(2020), no. 4, 21 Jan., pages 799-817
Citation details: volume:128 ; year:2020 ; number:4 ; day:21 ; month:01 ; pages:799-817
DOI: 10.1007/s11263-020-01294-2 (full text; license required)
Catalog ID: OLC2057753867
LEADER 01000caa a22002652 4500
001    OLC2057753867
003    DE-627
005    20230504132124.0
007    tu
008    200820s2020 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11263-020-01294-2 |2 doi
035    |a (DE-627)OLC2057753867
035    |a (DE-He213)s11263-020-01294-2-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |q VZ
100 1  |a Song, Taeyong |e verfasserin |4 aut
245 10 |a Simultaneous Deep Stereo Matching and Dehazing with Feature Attention
264  1 |c 2020
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer Science+Business Media, LLC, part of Springer Nature 2020
520    |a Abstract Unveiling the dense correspondence under the haze layer remains a challenging task, since the scattering effects result in less distinctive image features. Contrarily, dehazing is often confused by the airlight-albedo ambiguity which cannot be resolved independently at each pixel. In this paper, we introduce a deep convolutional neural network that simultaneously estimates a disparity and clear image from a hazy stereo image pair. Both tasks are synergistically formulated by fusing depth information from the matching cost and haze transmission. To learn the optimal fusion of depth-related features, we present a novel encoder-decoder architecture that extends the core idea of attention mechanism to the simultaneous stereo matching and dehazing. As a result, our method estimates high-quality disparity for the stereo images in scattering media, and produces appearance images with enhanced visibility. Finally, we further propose an effective strategy for adaptation to camera-captured images by distilling the cross-domain knowledge. Experiments on both synthetic and real-world scenarios including comparisons with state-of-the-art methods demonstrate the effectiveness and flexibility of our approach.
650  4 |a Stereo matching
650  4 |a Dehazing
650  4 |a CNN
650  4 |a Multi-task learning
650  4 |a Knowledge distillation
650  4 |a Stereo confidence
650  4 |a Adverse weather condition
700 1  |a Kim, Youngjung |4 aut
700 1  |a Oh, Changjae |4 aut
700 1  |a Jang, Hyunsung |4 aut
700 1  |a Ha, Namkoo |4 aut
700 1  |a Sohn, Kwanghoon |4 aut
773 08 |i Enthalten in |t International journal of computer vision |d Springer US, 1987 |g 128(2020), 4 vom: 21. Jan., Seite 799-817 |w (DE-627)129354252 |w (DE-600)155895-X |w (DE-576)018081428 |x 0920-5691 |7 nnns
773 18 |g volume:128 |g year:2020 |g number:4 |g day:21 |g month:01 |g pages:799-817
856 41 |u https://doi.org/10.1007/s11263-020-01294-2 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
912    |a GBV_ILN_2244
951    |a AR
952    |d 128 |j 2020 |e 4 |b 21 |c 01 |h 799-817
Jan., Seite 799-817</subfield><subfield code="w">(DE-627)129354252</subfield><subfield code="w">(DE-600)155895-X</subfield><subfield code="w">(DE-576)018081428</subfield><subfield code="x">0920-5691</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:128</subfield><subfield code="g">year:2020</subfield><subfield code="g">number:4</subfield><subfield code="g">day:21</subfield><subfield code="g">month:01</subfield><subfield code="g">pages:799-817</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11263-020-01294-2</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2244</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">128</subfield><subfield code="j">2020</subfield><subfield code="e">4</subfield><subfield code="b">21</subfield><subfield code="c">01</subfield><subfield code="h">799-817</subfield></datafield></record></collection>