Deception detection using multimodal fusion approaches
Detailed description

Author: Chebbi, Safa [author]
Format: E-Article
Language: English
Published: 2021
Keywords: Deception detection; Multimodality; Audio; Video; Text; Feature selection; Feature-level fusion; Decision-level fusion; Belief theory
Note: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
Contained in: Multimedia tools and applications - Dordrecht [u.a.] : Springer Science + Business Media B.V, 1995; 82(2021), issue 9, 29 June, pages 13073-13102
Contained in: volume:82 ; year:2021 ; number:9 ; day:29 ; month:06 ; pages:13073-13102
Links: https://dx.doi.org/10.1007/s11042-021-11148-9 (full text, subscription required)
DOI / URN: 10.1007/s11042-021-11148-9
Catalog ID: SPR049771043
LEADER 01000naa a22002652 4500
001    SPR049771043
003    DE-627
005    20230323113802.0
007    cr uuu---uuuuu
008    230323s2021 xx |||||o 00| ||eng c
024 7  |a 10.1007/s11042-021-11148-9 |2 doi
035    |a (DE-627)SPR049771043
035    |a (SPR)s11042-021-11148-9-e
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Chebbi, Safa |e verfasserin |0 (orcid)0000-0002-2671-7144 |4 aut
245 10 |a Deception detection using multimodal fusion approaches
264  1 |c 2021
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
520    |a Abstract: Automatic deception detection is an important task that has attracted considerable interest in many fields because of its potential applications. In particular, it can improve justice and security by helping to detect deceivers in high-stakes situations across jurisprudence, law enforcement and national security, among other domains. However, existing deception detection systems are still not as accurate as expected, which makes their use risky, especially in critical settings. This article outlines an approach for automatically distinguishing deceit from truth based on audio, video and text modalities and explores combining these modalities to detect deception more accurately. Each modality is first evaluated separately; feature-level and decision-level fusion approaches are then proposed to combine the considered modalities. The feature-level fusion approach investigates diverse feature selection techniques to select the most relevant features from the full feature set, while the decision-level fusion approach is based on belief theory and takes the certainty degree of each modality into account. The experiments use a real-life video dataset of people communicating truthfully or deceptively, collected from public American court trials. Unimodal models trained on audio, video and text separately achieve accuracy rates of 60%, 94% and 58%, respectively. With the feature-level fusion approach, the best deception detection accuracy reaches 93% using only 19 combined features, while a 100% deception recognition rate is obtained with the proposed decision-level fusion approach, outperforming results reported in the literature. (Illustrative sketches of both fusion steps follow the record below.)
650  4 |a Deception detection |7 (dpeaa)DE-He213
650  4 |a Multimodality |7 (dpeaa)DE-He213
650  4 |a Audio |7 (dpeaa)DE-He213
650  4 |a Video |7 (dpeaa)DE-He213
650  4 |a Text |7 (dpeaa)DE-He213
650  4 |a Feature selection |7 (dpeaa)DE-He213
650  4 |a Feature-level fusion |7 (dpeaa)DE-He213
650  4 |a Decision-level fusion |7 (dpeaa)DE-He213
650  4 |a Belief theory |7 (dpeaa)DE-He213
700 1  |a Jebara, Sofia Ben |4 aut
773 08 |i Enthalten in |t Multimedia tools and applications |d Dordrecht [u.a.] : Springer Science + Business Media B.V, 1995 |g 82(2021), 9 vom: 29. Juni, Seite 13073-13102 |w (DE-627)27135030X |w (DE-600)1479928-5 |x 1573-7721 |7 nnns
773 18 |g volume:82 |g year:2021 |g number:9 |g day:29 |g month:06 |g pages:13073-13102
856 40 |u https://dx.doi.org/10.1007/s11042-021-11148-9 |z lizenzpflichtig |3 Volltext
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2031 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2039 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2093 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2107 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2119 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2188 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2446 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2472 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4246 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4328 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 82 |j 2021 |e 9 |b 29 |c 06 |h 13073-13102 |
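The abstract above describes two fusion strategies: feature-level fusion with feature selection over combined audio, video and text descriptors, and decision-level fusion based on belief (Dempster-Shafer) theory weighted by each modality's certainty degree. The two Python sketches below illustrate, under stated assumptions, how such steps could look; they are not the authors' code, and the feature dimensions, dataset size, certainty values, classifier and function names are all hypothetical.

```python
# Sketch 1 (assumed, not the article's pipeline): feature-level (early) fusion.
# Per-modality feature vectors are concatenated and only the k most relevant
# features are kept before training a classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120                             # hypothetical number of court-trial clips
audio = rng.normal(size=(n, 40))    # e.g. prosodic / spectral features
video = rng.normal(size=(n, 60))    # e.g. facial-expression features
text = rng.normal(size=(n, 30))     # e.g. lexical features
y = rng.integers(0, 2, size=n)      # 0 = truthful, 1 = deceptive (synthetic labels)

X = np.hstack([audio, video, text])  # concatenate modalities: feature-level fusion
# k=19 mirrors the "19 combined features" mentioned in the abstract.
model = make_pipeline(SelectKBest(f_classif, k=19), SVC())
model.fit(X, y)
print("training accuracy on synthetic data:", model.score(X, y))
```

```python
# Sketch 2 (assumed): decision-level fusion with Dempster-Shafer belief combination.
# Each modality casts a hard decision that is discounted by its certainty degree;
# the discounted mass functions are then combined with Dempster's rule.
from itertools import product

FRAME = ("truth", "deception")      # frame of discernment


def mass_from_vote(label, certainty):
    """Discounted categorical mass: `certainty` on the predicted label,
    the remainder on the whole frame (ignorance)."""
    return {frozenset([label]): certainty, frozenset(FRAME): 1.0 - certainty}


def combine(m1, m2):
    """Dempster's rule of combination for two mass functions."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}


def fuse_decisions(votes):
    """Fuse (label, certainty) pairs from several modalities and return the
    singleton hypothesis with the highest combined mass."""
    fused = mass_from_vote(*votes[0])
    for vote in votes[1:]:
        fused = combine(fused, mass_from_vote(*vote))
    singletons = {next(iter(s)): w for s, w in fused.items() if len(s) == 1}
    return max(singletons, key=singletons.get), fused


# Hypothetical per-modality decisions; the certainty degrees loosely echo the
# unimodal accuracies (audio 60%, video 94%, text 58%) reported in the abstract.
votes = [("deception", 0.60), ("deception", 0.94), ("truth", 0.58)]
label, fused = fuse_decisions(votes)
print("fused decision:", label)
```

In this toy run the confident video modality dominates, so the fused decision is "deception"; with real classifiers the certainty degrees would come from validation performance or per-sample scores rather than fixed constants.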
abstract |
Abstract Automatic deception detection is an important task that has gained a huge interest in different fields due to its potential applications. Particularly, it can improve justice and security in society by helping in detecting deceivers in high-stakes situations across jurisprudence, law enforcement, and national security domains, among others. However, the existing deception detection systems until today are not as accurate as it is expected, which makes their use very risky especially in critical fields. This article outlines an approach for automatically distinguishing between deceit and truth based on audio, video and text modalities and explores the possibility of combining them together in order to detect deception more accurately. First, each modality has been evaluated separately and then a feature and decision-level fusion approaches have been proposed to combine the considered modalities. The proposed feature level fusion approach investigates a diversified feature selection techniques to select the most relevant ones among the whole used feature set, while the decision level fusion approach is based on the belief theory considering information about the certainty degree of each modality. To do so, we used a real-life video dataset of people communicating truthfully or deceptively collected from public american court trials. Unimodal models trained on audio, video and text separately achieved an accuracy rate of 60%, 94% and 58% respectively. When using the feature level fusion approach, the best accuracy deception detection result reaches 93% using only 19 combined features, while a 100% deception recognition rate has been obtained with the decision-level fusion proposed approach, outperforming the results obtained in the literature. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
container_issue |
9 |
title_short |
Deception detection using multimodal fusion approaches |
url |
https://dx.doi.org/10.1007/s11042-021-11148-9 |
remote_bool |
true |
author2 |
Jebara, Sofia Ben |
author2Str |
Jebara, Sofia Ben |
ppnlink |
27135030X |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11042-021-11148-9 |
up_date |
2024-07-04T02:13:20.484Z |
_version_ |
1803612796209856512 |