The separation of reflected and transparent layers from real-world image sequence
Abstract: This paper presents an optimal method for the separation of reflected and transparent layers from real-world scene images. Whereas past research has been applied to indoor environments and static cameras, our technique can be used for outdoor scenes and moving cameras. The method is based on spatio-temporal analysis, in particular epipolar plane images (EPIs). Edge and color information from the EPIs is used to segment regions on the EPIs efficiently and to separate the reflected and transparent layers. The method can be used to refine building textures by removing reflections from captured images for the purpose of city modeling.
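For context on the EPI-based approach mentioned in the abstract (this is not part of the catalog record or of the paper itself), here is a minimal Python/NumPy sketch of how an epipolar plane image can be formed from an image sequence: one scanline is taken from every frame and stacked over time. For a camera translating parallel to the image plane, scene points at different depths then trace lines of different slopes in this slice, and superimposed reflected and transmitted layers appear as overlapping line patterns. The function name and the synthetic frames below are illustrative assumptions.

```python
import numpy as np

def epipolar_plane_image(frames, row):
    """Stack one scanline (image row) from every frame of a sequence.

    The result is a y = const slice of the spatio-temporal volume
    (time along the vertical axis, image x along the horizontal axis).
    Scene points trace lines whose slope depends on depth; reflected
    and transmitted layers show up as distinct line patterns.

    frames : sequence of H x W (or H x W x 3) images
    row    : index of the scanline to extract
    """
    return np.stack([np.asarray(f)[row] for f in frames], axis=0)

# Example with a synthetic sequence of 60 random 240x320 RGB frames.
frames = [np.random.rand(240, 320, 3) for _ in range(60)]
epi = epipolar_plane_image(frames, row=120)  # shape (60, 320, 3): time x x-axis x color
```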
Detailed description
| Field | Value |
|---|---|
| Author: | Oo, Thanda [author] |
| Format: | Article |
| Language: | English |
| Published: | 2006 |
| Keywords: | EPI analysis; Layer extraction; Reflectance and transparency; City modeling |
| Note: | © Springer-Verlag 2006 |
| Parent work: | Contained in: Machine vision and applications - Springer-Verlag, 1988, 18(2006), 1, 03 Oct., pages 17-24 |
| Parent work: | volume:18 ; year:2006 ; number:1 ; day:03 ; month:10 ; pages:17-24 |
| Links: | https://doi.org/10.1007/s00138-006-0043-1 (full text, license required) |
| DOI / URN: | 10.1007/s00138-006-0043-1 |
| Catalog ID: | OLC2074623461 |
Full record (MARC21):

LEADER    01000caa a22002652 4500
001       OLC2074623461
003       DE-627
005       20230401063125.0
007       tu
008       200820s2006 xx ||||| 00| ||eng c
024 7_    |a 10.1007/s00138-006-0043-1 |2 doi
035 __    |a (DE-627)OLC2074623461
035 __    |a (DE-He213)s00138-006-0043-1-p
040 __    |a DE-627 |b ger |c DE-627 |e rakwb
041 __    |a eng
082 04    |a 004 |q VZ
084 __    |a 11 |2 ssgn
100 1_    |a Oo, Thanda |e verfasserin |4 aut
245 10    |a The separation of reflected and transparent layers from real-world image sequence
264 _1    |c 2006
336 __    |a Text |b txt |2 rdacontent
337 __    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __    |a Band |b nc |2 rdacarrier
500 __    |a © Springer-Verlag 2006
520 __    |a Abstract This paper presents an optimal method for the separation of reflected and transparent layers from real-world scene images. Whereas past research has been applied to indoor environments and static cameras, our technique can be used for outdoor scenes and motion cameras. The method is based on spatio-temporal analysis, especially using epipolar plane images (EPI). The edge and color information of EPI has been used to segment the areas on EPIs efficiently and separate the reflected and transparent layers. This method can be used for refining building textures by removing reflections from captured images for the purpose of city modeling.
650 _4    |a EPI analysis
650 _4    |a Layer extraction
650 _4    |a Reflectance and transparency
650 _4    |a City modeling
700 1_    |a Kawasaki, Hiroshi |4 aut
700 1_    |a Ohsawa, Yutaka |4 aut
700 1_    |a Ikeuchi, Katsushi |4 aut
773 08    |i Enthalten in |t Machine vision and applications |d Springer-Verlag, 1988 |g 18(2006), 1 vom: 03. Okt., Seite 17-24 |w (DE-627)129248843 |w (DE-600)59385-0 |w (DE-576)017944139 |x 0932-8092 |7 nnns
773 18    |g volume:18 |g year:2006 |g number:1 |g day:03 |g month:10 |g pages:17-24
856 41    |u https://doi.org/10.1007/s00138-006-0043-1 |z lizenzpflichtig |3 Volltext
912 __    |a GBV_USEFLAG_A
912 __    |a SYSFLAG_A
912 __    |a GBV_OLC
912 __    |a SSG-OLC-MAT
912 __    |a GBV_ILN_21
912 __    |a GBV_ILN_32
912 __    |a GBV_ILN_40
912 __    |a GBV_ILN_70
912 __    |a GBV_ILN_100
912 __    |a GBV_ILN_2015
912 __    |a GBV_ILN_2018
912 __    |a GBV_ILN_2020
912 __    |a GBV_ILN_4116
912 __    |a GBV_ILN_4277
912 __    |a GBV_ILN_4307
912 __    |a GBV_ILN_4313
951 __    |a AR
952 __    |d 18 |j 2006 |e 1 |b 03 |c 10 |h 17-24