A visible-light and infrared video database for performance evaluation of video/image fusion methods
Abstract: In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state of the art. By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes.
Detailed description

Author: Ellmauthaler, Andreas [author]
Format: Article
Language: English
Published: 2017
Subject headings: Infrared/visible image/video database; Image registration; Image fusion; Camera calibration
Note: © Springer Science+Business Media, LLC, part of Springer Nature 2017
Parent work: Contained in: Multidimensional systems and signal processing - Springer US, 1990, 30(2017), no. 1, 27 Dec., pages 119-143
Parent work: volume:30 ; year:2017 ; number:1 ; day:27 ; month:12 ; pages:119-143
Links:
DOI / URN: 10.1007/s11045-017-0548-y
Catalog ID: OLC2048110010
LEADER 01000caa a22002652 4500
001    OLC2048110010
003    DE-627
005    20230503194622.0
007    tu
008    200819s2017 xx ||||| 00| ||eng c
024 7_ |a 10.1007/s11045-017-0548-y |2 doi
035 __ |a (DE-627)OLC2048110010
035 __ |a (DE-He213)s11045-017-0548-y-p
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 510 |q VZ
100 1_ |a Ellmauthaler, Andreas |e verfasserin |4 aut
245 10 |a A visible-light and infrared video database for performance evaluation of video/image fusion methods
264 _1 |c 2017
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a © Springer Science+Business Media, LLC, part of Springer Nature 2017
520 __ |a Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state of the art. By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes.
650 _4 |a Infrared/visible image/video database
650 _4 |a Image registration
650 _4 |a Image fusion
650 _4 |a Camera calibration
700 1_ |a Pagliari, Carla L. |4 aut
700 1_ |a da Silva, Eduardo A. B. |4 aut
700 1_ |a Gois, Jonathan N. |4 aut
700 1_ |a Neves, Sergio R. |4 aut
773 08 |i Enthalten in |t Multidimensional systems and signal processing |d Springer US, 1990 |g 30(2017), 1 vom: 27. Dez., Seite 119-143 |w (DE-627)130892076 |w (DE-600)1041098-3 |w (DE-576)038686074 |x 0923-6082 |7 nnns
773 18 |g volume:30 |g year:2017 |g number:1 |g day:27 |g month:12 |g pages:119-143
856 41 |u https://doi.org/10.1007/s11045-017-0548-y |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_OLC
912 __ |a SSG-OLC-MAT
912 __ |a SSG-OPC-MAT
912 __ |a GBV_ILN_70
951 __ |a AR
952 __ |d 30 |j 2017 |e 1 |b 27 |c 12 |h 119-143
author_variant |
a e ae c l p cl clp s e a b d seab seabd j n g jn jng s r n sr srn |
---|---|
matchkey_str |
article:09236082:2017----::vsbeihadnrrdiedtbsfrefraceautoo |
hierarchy_sort_str |
2017 |
publishDate |
2017 |
allfields |
10.1007/s11045-017-0548-y doi (DE-627)OLC2048110010 (DE-He213)s11045-017-0548-y-p DE-627 ger DE-627 rakwb eng 510 VZ Ellmauthaler, Andreas verfasserin aut A visible-light and infrared video database for performance evaluation of video/image fusion methods 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2017 Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes. Infrared/visible image/video database Image registration Image fusion Camera calibration Pagliari, Carla L. aut da Silva, Eduardo A. B. aut Gois, Jonathan N. aut Neves, Sergio R. aut Enthalten in Multidimensional systems and signal processing Springer US, 1990 30(2017), 1 vom: 27. Dez., Seite 119-143 (DE-627)130892076 (DE-600)1041098-3 (DE-576)038686074 0923-6082 nnns volume:30 year:2017 number:1 day:27 month:12 pages:119-143 https://doi.org/10.1007/s11045-017-0548-y lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OPC-MAT GBV_ILN_70 AR 30 2017 1 27 12 119-143 |
spelling |
10.1007/s11045-017-0548-y doi (DE-627)OLC2048110010 (DE-He213)s11045-017-0548-y-p DE-627 ger DE-627 rakwb eng 510 VZ Ellmauthaler, Andreas verfasserin aut A visible-light and infrared video database for performance evaluation of video/image fusion methods 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2017 Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes. Infrared/visible image/video database Image registration Image fusion Camera calibration Pagliari, Carla L. aut da Silva, Eduardo A. B. aut Gois, Jonathan N. aut Neves, Sergio R. aut Enthalten in Multidimensional systems and signal processing Springer US, 1990 30(2017), 1 vom: 27. Dez., Seite 119-143 (DE-627)130892076 (DE-600)1041098-3 (DE-576)038686074 0923-6082 nnns volume:30 year:2017 number:1 day:27 month:12 pages:119-143 https://doi.org/10.1007/s11045-017-0548-y lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OPC-MAT GBV_ILN_70 AR 30 2017 1 27 12 119-143 |
allfields_unstemmed |
10.1007/s11045-017-0548-y doi (DE-627)OLC2048110010 (DE-He213)s11045-017-0548-y-p DE-627 ger DE-627 rakwb eng 510 VZ Ellmauthaler, Andreas verfasserin aut A visible-light and infrared video database for performance evaluation of video/image fusion methods 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2017 Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes. Infrared/visible image/video database Image registration Image fusion Camera calibration Pagliari, Carla L. aut da Silva, Eduardo A. B. aut Gois, Jonathan N. aut Neves, Sergio R. aut Enthalten in Multidimensional systems and signal processing Springer US, 1990 30(2017), 1 vom: 27. Dez., Seite 119-143 (DE-627)130892076 (DE-600)1041098-3 (DE-576)038686074 0923-6082 nnns volume:30 year:2017 number:1 day:27 month:12 pages:119-143 https://doi.org/10.1007/s11045-017-0548-y lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OPC-MAT GBV_ILN_70 AR 30 2017 1 27 12 119-143 |
allfieldsGer |
10.1007/s11045-017-0548-y doi (DE-627)OLC2048110010 (DE-He213)s11045-017-0548-y-p DE-627 ger DE-627 rakwb eng 510 VZ Ellmauthaler, Andreas verfasserin aut A visible-light and infrared video database for performance evaluation of video/image fusion methods 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2017 Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes. Infrared/visible image/video database Image registration Image fusion Camera calibration Pagliari, Carla L. aut da Silva, Eduardo A. B. aut Gois, Jonathan N. aut Neves, Sergio R. aut Enthalten in Multidimensional systems and signal processing Springer US, 1990 30(2017), 1 vom: 27. Dez., Seite 119-143 (DE-627)130892076 (DE-600)1041098-3 (DE-576)038686074 0923-6082 nnns volume:30 year:2017 number:1 day:27 month:12 pages:119-143 https://doi.org/10.1007/s11045-017-0548-y lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OPC-MAT GBV_ILN_70 AR 30 2017 1 27 12 119-143 |
allfieldsSound |
10.1007/s11045-017-0548-y doi (DE-627)OLC2048110010 (DE-He213)s11045-017-0548-y-p DE-627 ger DE-627 rakwb eng 510 VZ Ellmauthaler, Andreas verfasserin aut A visible-light and infrared video database for performance evaluation of video/image fusion methods 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2017 Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes. Infrared/visible image/video database Image registration Image fusion Camera calibration Pagliari, Carla L. aut da Silva, Eduardo A. B. aut Gois, Jonathan N. aut Neves, Sergio R. aut Enthalten in Multidimensional systems and signal processing Springer US, 1990 30(2017), 1 vom: 27. Dez., Seite 119-143 (DE-627)130892076 (DE-600)1041098-3 (DE-576)038686074 0923-6082 nnns volume:30 year:2017 number:1 day:27 month:12 pages:119-143 https://doi.org/10.1007/s11045-017-0548-y lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OPC-MAT GBV_ILN_70 AR 30 2017 1 27 12 119-143 |
language |
English |
source |
Enthalten in Multidimensional systems and signal processing 30(2017), 1 vom: 27. Dez., Seite 119-143 volume:30 year:2017 number:1 day:27 month:12 pages:119-143 |
sourceStr |
Enthalten in Multidimensional systems and signal processing 30(2017), 1 vom: 27. Dez., Seite 119-143 volume:30 year:2017 number:1 day:27 month:12 pages:119-143 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Infrared/visible image/video database Image registration Image fusion Camera calibration |
dewey-raw |
510 |
isfreeaccess_bool |
false |
container_title |
Multidimensional systems and signal processing |
authorswithroles_txt_mv |
Ellmauthaler, Andreas @@aut@@ Pagliari, Carla L. @@aut@@ da Silva, Eduardo A. B. @@aut@@ Gois, Jonathan N. @@aut@@ Neves, Sergio R. @@aut@@ |
publishDateDaySort_date |
2017-12-27T00:00:00Z |
hierarchy_top_id |
130892076 |
dewey-sort |
3510 |
id |
OLC2048110010 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2048110010</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503194622.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2017 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11045-017-0548-y</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2048110010</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11045-017-0548-y-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">510</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Ellmauthaler, Andreas</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">A visible-light and infrared video database for performance evaluation of video/image fusion methods</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC, part of Springer Nature 2017</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Infrared/visible image/video database</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image registration</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image fusion</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Camera calibration</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Pagliari, Carla L.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">da Silva, Eduardo A. B.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Gois, Jonathan N.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Neves, Sergio R.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Multidimensional systems and signal processing</subfield><subfield code="d">Springer US, 1990</subfield><subfield code="g">30(2017), 1 vom: 27. 
Dez., Seite 119-143</subfield><subfield code="w">(DE-627)130892076</subfield><subfield code="w">(DE-600)1041098-3</subfield><subfield code="w">(DE-576)038686074</subfield><subfield code="x">0923-6082</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:30</subfield><subfield code="g">year:2017</subfield><subfield code="g">number:1</subfield><subfield code="g">day:27</subfield><subfield code="g">month:12</subfield><subfield code="g">pages:119-143</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11045-017-0548-y</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OPC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">30</subfield><subfield code="j">2017</subfield><subfield code="e">1</subfield><subfield code="b">27</subfield><subfield code="c">12</subfield><subfield code="h">119-143</subfield></datafield></record></collection>
|
author |
Ellmauthaler, Andreas |
spellingShingle |
Ellmauthaler, Andreas ddc 510 misc Infrared/visible image/video database misc Image registration misc Image fusion misc Camera calibration A visible-light and infrared video database for performance evaluation of video/image fusion methods |
authorStr |
Ellmauthaler, Andreas |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)130892076 |
format |
Article |
dewey-ones |
510 - Mathematics |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
0923-6082 |
topic_title |
510 VZ A visible-light and infrared video database for performance evaluation of video/image fusion methods Infrared/visible image/video database Image registration Image fusion Camera calibration |
topic |
ddc 510 misc Infrared/visible image/video database misc Image registration misc Image fusion misc Camera calibration |
topic_unstemmed |
ddc 510 misc Infrared/visible image/video database misc Image registration misc Image fusion misc Camera calibration |
topic_browse |
ddc 510 misc Infrared/visible image/video database misc Image registration misc Image fusion misc Camera calibration |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Multidimensional systems and signal processing |
hierarchy_parent_id |
130892076 |
dewey-tens |
510 - Mathematics |
hierarchy_top_title |
Multidimensional systems and signal processing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)130892076 (DE-600)1041098-3 (DE-576)038686074 |
title |
A visible-light and infrared video database for performance evaluation of video/image fusion methods |
ctrlnum |
(DE-627)OLC2048110010 (DE-He213)s11045-017-0548-y-p |
title_full |
A visible-light and infrared video database for performance evaluation of video/image fusion methods |
author_sort |
Ellmauthaler, Andreas |
journal |
Multidimensional systems and signal processing |
journalStr |
Multidimensional systems and signal processing |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science |
recordtype |
marc |
publishDateSort |
2017 |
contenttype_str_mv |
txt |
container_start_page |
119 |
author_browse |
Ellmauthaler, Andreas Pagliari, Carla L. da Silva, Eduardo A. B. Gois, Jonathan N. Neves, Sergio R. |
container_volume |
30 |
class |
510 VZ |
format_se |
Aufsätze |
author-letter |
Ellmauthaler, Andreas |
doi_str_mv |
10.1007/s11045-017-0548-y |
dewey-full |
510 |
title_sort |
a visible-light and infrared video database for performance evaluation of video/image fusion methods |
title_auth |
A visible-light and infrared video database for performance evaluation of video/image fusion methods |
abstract |
Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes. © Springer Science+Business Media, LLC, part of Springer Nature 2017 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT SSG-OPC-MAT GBV_ILN_70 |
container_issue |
1 |
title_short |
A visible-light and infrared video database for performance evaluation of video/image fusion methods |
url |
https://doi.org/10.1007/s11045-017-0548-y |
remote_bool |
false |
author2 |
Pagliari, Carla L.; da Silva, Eduardo A. B.; Gois, Jonathan N.; Neves, Sergio R.
author2Str |
Pagliari, Carla L.; da Silva, Eduardo A. B.; Gois, Jonathan N.; Neves, Sergio R.
ppnlink |
130892076 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11045-017-0548-y |
up_date |
2024-07-03T17:34:24.479Z |
_version_ |
1803580147739131904 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2048110010</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503194622.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2017 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11045-017-0548-y</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2048110010</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11045-017-0548-y-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">510</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Ellmauthaler, Andreas</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">A visible-light and infrared video database for performance evaluation of video/image fusion methods</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC, part of Springer Nature 2017</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the art. 
By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Infrared/visible image/video database</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image registration</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image fusion</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Camera calibration</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Pagliari, Carla L.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">da Silva, Eduardo A. B.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Gois, Jonathan N.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Neves, Sergio R.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Multidimensional systems and signal processing</subfield><subfield code="d">Springer US, 1990</subfield><subfield code="g">30(2017), 1 vom: 27. 
Dez., Seite 119-143</subfield><subfield code="w">(DE-627)130892076</subfield><subfield code="w">(DE-600)1041098-3</subfield><subfield code="w">(DE-576)038686074</subfield><subfield code="x">0923-6082</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:30</subfield><subfield code="g">year:2017</subfield><subfield code="g">number:1</subfield><subfield code="g">day:27</subfield><subfield code="g">month:12</subfield><subfield code="g">pages:119-143</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11045-017-0548-y</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OPC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">30</subfield><subfield code="j">2017</subfield><subfield code="e">1</subfield><subfield code="b">27</subfield><subfield code="c">12</subfield><subfield code="h">119-143</subfield></datafield></record></collection>
|
score |
7.400646 |