Hybrid video emotional tagging using users’ EEG and video content
Abstract: In this paper, we propose novel hybrid approaches to annotate videos in the valence and arousal spaces by using users’ electroencephalogram (EEG) signals and video content. Firstly, several audio and visual features are extracted from the video clips, and five frequency features are extracted from each channel of the EEG signals. Secondly, statistical analyses are conducted to explore the relationships among the emotional tags, EEG features and video features. Thirdly, three Bayesian networks are constructed to annotate videos by combining the video and EEG features through independent feature-level fusion, decision-level fusion and dependent feature-level fusion. To evaluate the effectiveness of our approaches, we designed and conducted a psychophysiological experiment to collect data, including emotion-inducing video clips, users’ EEG responses while watching the selected clips, and emotional video tags collected through participants’ self-reports after each clip. The experimental results show that the proposed fusion methods outperform conventional emotional tagging methods that use either video or EEG features alone, in both the valence and arousal spaces. Moreover, with the help of the EEG features we can narrow the semantic gap between low-level video features and users’ high-level emotional tags.
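The pipeline the abstract describes (per-channel EEG frequency features, then fusion with video-based predictions) can be illustrated with a small sketch. This is not the authors' implementation: it assumes the five frequency features are band powers in the standard delta/theta/alpha/beta/gamma bands, and it uses a simple weighted average of classifier posteriors as a stand-in for the paper's Bayesian-network fusion; all names and parameters are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

# Assumed frequency bands (Hz); the paper's exact five features are not given
# in this record, so standard EEG bands are used purely for illustration.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def eeg_band_powers(eeg, fs=256.0):
    """eeg: array of shape (n_channels, n_samples).
    Returns an (n_channels, 5) array of band powers from Welch PSDs."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        # Integrate the power spectral density over each band.
        feats.append(trapezoid(psd[:, idx], freqs[idx], axis=-1))
    return np.stack(feats, axis=-1)

def decision_level_fusion(p_video, p_eeg, w_video=0.5):
    """Weighted average of class posteriors from a video-content classifier
    and an EEG classifier (e.g., over {low, high} valence). A stand-in for
    the Bayesian-network decision-level fusion described in the abstract."""
    p = w_video * np.asarray(p_video) + (1.0 - w_video) * np.asarray(p_eeg)
    return p / p.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((32, 256 * 60))   # 32 channels, 60 s at 256 Hz
    print(eeg_band_powers(eeg).shape)           # -> (32, 5)
    print(decision_level_fusion([0.4, 0.6], [0.2, 0.8]))
```

In the paper's terms, combining posteriors this way corresponds to decision-level fusion; the feature-level variants would instead feed the EEG band powers and the audio-visual features jointly into a single model before classification.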
Detailed description

| Field | Value |
|---|---|
| Author | Wang, Shangfei [author] |
| Format | Article |
| Language | English |
| Published | 2013 |
| Keywords | Emotional tagging; Videos; Independent feature-level fusion; Decision-level fusion; Dependent feature-level fusion |
| Note | © Springer Science+Business Media New York 2013 |
| Parent work | Contained in: Multimedia tools and applications - Springer US, 1995, 72(2013), 2, 10 Apr., pages 1257-1283 |
| Parent work | volume:72 ; year:2013 ; number:2 ; day:10 ; month:04 ; pages:1257-1283 |
| Links | https://doi.org/10.1007/s11042-013-1450-8 (full text, license required) |
| DOI / URN | 10.1007/s11042-013-1450-8 |
| Catalog ID | OLC2035012406 |
MARC record

LEADER 01000caa a22002652 4500
001    OLC2035012406
003    DE-627
005    20230503192705.0
007    tu
008    200819s2013 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11042-013-1450-8 |2 doi
035    |a (DE-627)OLC2035012406
035    |a (DE-He213)s11042-013-1450-8-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 070 |a 004 |q VZ
100 1  |a Wang, Shangfei |e verfasserin |4 aut
245 10 |a Hybrid video emotional tagging using users’ EEG and video content
264  1 |c 2013
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer Science+Business Media New York 2013
520    |a Abstract In this paper, we propose novel hybrid approaches to annotate videos in valence and arousal spaces by using users’ electroencephalogram (EEG) signals and video content. Firstly, several audio and visual features are extracted from video clips and five frequency features are extracted from each channel of the EEG signals. Secondly, statistical analyses are conducted to explore the relationships among emotional tags, EEG and video features. Thirdly, three Bayesian Networks are constructed to annotate videos by combining the video and EEG features at independent feature-level fusion, decision-level fusion and dependent feature-level fusion. In order to evaluate the effectiveness of our approaches, we designed and conducted the psychophysiological experiment to collect data, including emotion-induced video clips, users’ EEG responses while watching the selected video clips, and emotional video tags collected through participants’ self-report after watching each clip. The experimental results show that the proposed fusion methods outperform the conventional emotional tagging methods that use either video or EEG features alone in both valence and arousal spaces. Moreover, we can narrow down the semantic gap between the low-level video features and the users’ high-level emotional tags with the help of EEG features.
650  4 |a Emotional tagging
650  4 |a Videos
650  4 |a Independent feature-level fusion
650  4 |a Decision-level fusion
650  4 |a Dependent feature-level fusion
700 1  |a Zhu, Yachen |4 aut
700 1  |a Wu, Guobing |4 aut
700 1  |a Ji, Qiang |4 aut
773 08 |i Enthalten in |t Multimedia tools and applications |d Springer US, 1995 |g 72(2013), 2 vom: 10. Apr., Seite 1257-1283 |w (DE-627)189064145 |w (DE-600)1287642-2 |w (DE-576)052842126 |x 1380-7501 |7 nnns
773 18 |g volume:72 |g year:2013 |g number:2 |g day:10 |g month:04 |g pages:1257-1283
856 41 |u https://doi.org/10.1007/s11042-013-1450-8 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
912    |a SSG-OLC-BUB
912    |a SSG-OLC-MKW
912    |a GBV_ILN_70
951    |a AR
952    |d 72 |j 2013 |e 2 |b 10 |c 04 |h 1257-1283