Cross-Modal Saliency Correlation for Image Annotation
Abstract: Automatic image annotation is an attractive service for users and administrators of online photo-sharing websites. In this paper, we propose an image annotation approach that exploits cross-modal saliency correlation, covering both visual and textual saliency. For textual saliency, a concept graph is first established based on the associations between labels; semantic communities and latent textual saliency are then detected. For visual saliency, we adopt a dual-layer bag-of-words (DL-BoW) model that integrates local features with the salient regions of the image. Experiments on the MIRFlickr and IAPR TC-12 datasets demonstrate that the proposed method outperforms other state-of-the-art approaches.
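The textual-saliency step described in the abstract (a concept graph built from label associations, followed by semantic-community detection) can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the tag sets are invented, edges are simple co-occurrence counts, and connected components are used as a crude stand-in for whatever community-detection algorithm the authors actually employ.

```python
# Hypothetical sketch of building a label co-occurrence "concept graph"
# and grouping labels into semantic communities. All data and helper
# names are illustrative assumptions, not taken from the paper.
from collections import Counter, defaultdict
from itertools import combinations

# Toy per-image tag sets standing in for a dataset like MIRFlickr.
tagged_images = [
    {"sky", "cloud", "bird"},
    {"sky", "cloud", "sun"},
    {"dog", "grass", "park"},
    {"dog", "park", "ball"},
]

# Edge weight = number of images on which two labels co-occur.
cooccur = Counter()
for tags in tagged_images:
    for a, b in combinations(sorted(tags), 2):
        cooccur[(a, b)] += 1

# Adjacency view of the concept graph.
adj = defaultdict(set)
for (a, b), _w in cooccur.items():
    adj[a].add(b)
    adj[b].add(a)

def components(adj):
    """Connected components -- a crude stand-in for community detection."""
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(adj[n] - seen)
        comps.append(comp)
    return comps

for comp in components(adj):
    print(sorted(comp))
```

On this toy input the labels split into a sky-related and a dog-related group; a real system would use a modularity-based community algorithm on the weighted graph rather than raw connectivity.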
Detailed description

Author: Gu, Yun [author]
Format: Article
Language: English
Published: 2016
Subjects: Image annotation; Visual saliency; Textual saliency
Note: © Springer Science+Business Media New York 2016
Contained in: Neural processing letters - Springer US, 1994, 45(2016), no. 3, 02 March, pages 777-789
Citation data: volume:45 ; year:2016 ; number:3 ; day:02 ; month:03 ; pages:777-789
DOI: 10.1007/s11063-016-9511-4
Catalog ID: OLC2044710331
LEADER 01000caa a22002652 4500
001 OLC2044710331
003 DE-627
005 20230503210256.0
007 tu
008 200820s2016 xx ||||| 00| ||eng c
024 7 |a 10.1007/s11063-016-9511-4 |2 doi
035 |a (DE-627)OLC2044710331
035 |a (DE-He213)s11063-016-9511-4-p
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
082 0 4 |a 000 |q VZ
100 1 |a Gu, Yun |e verfasserin |4 aut
245 1 0 |a Cross-Modal Saliency Correlation for Image Annotation
264 1 |c 2016
336 |a Text |b txt |2 rdacontent
337 |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 |a Band |b nc |2 rdacarrier
500 |a © Springer Science+Business Media New York 2016
520 |a Abstract Automatic image annotation is an attractive service for users and administrators of online photo sharing websites. In this paper, we propose an image annotation approach exploiting the crossmodal saliency correlation including visual and textual saliency. For textual saliency, a concept graph is firstly established based on the association between the labels. Then semantic communities and latent textual saliency are detected; For visual saliency, we adopt a dual-layer BoW (DL-BoW) model integrated with the local features and salient regions of the image. Experiments on MIRFlickr and IAPR TC-12 datasets demonstrate that the proposed method outperforms other state-of-the-art approaches.
650 4 |a Image annotation
650 4 |a Visual saliency
650 4 |a Textual saliency
700 1 |a Xue, Haoyang |4 aut
700 1 |a Yang, Jie |4 aut
773 0 8 |i Enthalten in |t Neural processing letters |d Springer US, 1994 |g 45(2016), 3 vom: 02. März, Seite 777-789 |w (DE-627)198692617 |w (DE-600)1316823-X |w (DE-576)052842762 |x 1370-4621 |7 nnns
773 1 8 |g volume:45 |g year:2016 |g number:3 |g day:02 |g month:03 |g pages:777-789
856 4 1 |u https://doi.org/10.1007/s11063-016-9511-4 |z lizenzpflichtig |3 Volltext
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_OLC
912 |a SSG-OLC-PSY
912 |a SSG-OLC-MAT
912 |a GBV_ILN_70
951 |a AR
952 |d 45 |j 2016 |e 3 |b 02 |c 03 |h 777-789