Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks
Abstract: With the advancement of photo-editing software, digital documents can easily be altered, which causes legal issues. This paper proposes an image authentication method that determines whether an image is authentic. Unlike many existing methods that work only with images in the JPEG format, the proposed method is image-format independent: it works with both uncompressed images and images in any compression format. To improve authentication accuracy, several strategies are applied, such as overlapping image blocks only in concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification, making online learning easier and achieving higher accuracy. In the experiments, the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images were used to evaluate the method. These benchmark datasets include two types of image tampering, namely image splicing and copy-move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters; the BPNN achieved a higher accuracy of up to 97.26%.
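Two of the feature-extraction choices named in the abstract, a local binary pattern (LBP) operator and the mean deviation used in place of the standard deviation, can be illustrated with a minimal sketch. This is not the paper's implementation: the paper uses a two-scale LBP operator, while a single-scale 3×3 variant is shown here for brevity, and the function names are illustrative.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre pixel of a 3x3 patch.

    Each neighbour contributes one bit: 1 if it is >= the centre value, else 0.
    Bits are collected clockwise from the top-left corner into a byte (0..255).
    """
    patch = np.asarray(patch)
    center = patch[1, 1]
    # Ring of 8 neighbours, clockwise from (0, 0).
    neighbours = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    bits = (neighbours >= center).astype(np.uint8)
    return int((bits * (1 << np.arange(8))).sum())

def mean_deviation(values):
    """Mean absolute deviation from the mean.

    A more outlier-robust spread measure than the standard deviation,
    which squares (and thus amplifies) large deviations.
    """
    v = np.asarray(values, dtype=float)
    return float(np.mean(np.abs(v - v.mean())))
```

For example, a perfectly flat patch yields LBP code 255 (every neighbour ties the centre), and `mean_deviation([1, 2, 3, 4])` is 1.0, versus a standard deviation of about 1.118 for the same values.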
Detailed description

Author: Wu, Meng-Luen [author]
Format: Article
Language: English
Published: 2017
Subjects: Digital image forensics; Digital image authentication; Tampered image detection; Artificial neural network
Note: © Springer Science+Business Media New York 2017
Parent work: Contained in: Applied intelligence - Springer US, 1991, 47(2017), no. 2, 13 March, pages 347-361
Parent work: volume:47 ; year:2017 ; number:2 ; day:13 ; month:03 ; pages:347-361
DOI / URN: 10.1007/s10489-017-0893-4
Catalog ID: OLC2066102997
LEADER 01000caa a22002652 4500
001    OLC2066102997
003    DE-627
005    20230502204957.0
007    tu
008    200820s2017 xx ||||| 00| ||eng c
024 7  |a 10.1007/s10489-017-0893-4 |2 doi
035    |a (DE-627)OLC2066102997
035    |a (DE-He213)s10489-017-0893-4-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |q VZ
100 1  |a Wu, Meng-Luen |e verfasserin |0 (orcid)0000-0002-8017-8073 |4 aut
245 10 |a Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks
264  1 |c 2017
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer Science+Business Media New York 2017
520    |a Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %.
650  4 |a Digital image forensics
650  4 |a Digital image authentication
650  4 |a Tampered image detection
650  4 |a Artificial neural network
700 1  |a Fahn, Chin-Shyurng |4 aut
700 1  |a Chen, Yi-Fan |4 aut
773 08 |i Enthalten in |t Applied intelligence |d Springer US, 1991 |g 47(2017), 2 vom: 13. März, Seite 347-361 |w (DE-627)130990515 |w (DE-600)1080229-0 |w (DE-576)029154286 |x 0924-669X |7 nnns
773 18 |g volume:47 |g year:2017 |g number:2 |g day:13 |g month:03 |g pages:347-361
856 41 |u https://doi.org/10.1007/s10489-017-0893-4 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
912    |a GBV_ILN_70
951    |a AR
952    |d 47 |j 2017 |e 2 |b 13 |c 03 |h 347-361
author_variant |
m l w mlw c s f csf y f c yfc |
---|---|
matchkey_str |
article:0924669X:2017----::mgfraidpnetaprdmgdtcinaeooelpigocretieto |
hierarchy_sort_str |
2017 |
publishDate |
2017 |
allfields |
10.1007/s10489-017-0893-4 doi (DE-627)OLC2066102997 (DE-He213)s10489-017-0893-4-p DE-627 ger DE-627 rakwb eng 004 VZ Wu, Meng-Luen verfasserin (orcid)0000-0002-8017-8073 aut Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. 
Digital image forensics Digital image authentication Tampered image detection Artificial neural network Fahn, Chin-Shyurng aut Chen, Yi-Fan aut Enthalten in Applied intelligence Springer US, 1991 47(2017), 2 vom: 13. März, Seite 347-361 (DE-627)130990515 (DE-600)1080229-0 (DE-576)029154286 0924-669X nnns volume:47 year:2017 number:2 day:13 month:03 pages:347-361 https://doi.org/10.1007/s10489-017-0893-4 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 AR 47 2017 2 13 03 347-361 |
spelling |
10.1007/s10489-017-0893-4 doi (DE-627)OLC2066102997 (DE-He213)s10489-017-0893-4-p DE-627 ger DE-627 rakwb eng 004 VZ Wu, Meng-Luen verfasserin (orcid)0000-0002-8017-8073 aut Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. 
Digital image forensics Digital image authentication Tampered image detection Artificial neural network Fahn, Chin-Shyurng aut Chen, Yi-Fan aut Enthalten in Applied intelligence Springer US, 1991 47(2017), 2 vom: 13. März, Seite 347-361 (DE-627)130990515 (DE-600)1080229-0 (DE-576)029154286 0924-669X nnns volume:47 year:2017 number:2 day:13 month:03 pages:347-361 https://doi.org/10.1007/s10489-017-0893-4 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 AR 47 2017 2 13 03 347-361 |
allfields_unstemmed |
10.1007/s10489-017-0893-4 doi (DE-627)OLC2066102997 (DE-He213)s10489-017-0893-4-p DE-627 ger DE-627 rakwb eng 004 VZ Wu, Meng-Luen verfasserin (orcid)0000-0002-8017-8073 aut Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. 
Digital image forensics Digital image authentication Tampered image detection Artificial neural network Fahn, Chin-Shyurng aut Chen, Yi-Fan aut Enthalten in Applied intelligence Springer US, 1991 47(2017), 2 vom: 13. März, Seite 347-361 (DE-627)130990515 (DE-600)1080229-0 (DE-576)029154286 0924-669X nnns volume:47 year:2017 number:2 day:13 month:03 pages:347-361 https://doi.org/10.1007/s10489-017-0893-4 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 AR 47 2017 2 13 03 347-361 |
allfieldsGer |
10.1007/s10489-017-0893-4 doi (DE-627)OLC2066102997 (DE-He213)s10489-017-0893-4-p DE-627 ger DE-627 rakwb eng 004 VZ Wu, Meng-Luen verfasserin (orcid)0000-0002-8017-8073 aut Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. 
Digital image forensics Digital image authentication Tampered image detection Artificial neural network Fahn, Chin-Shyurng aut Chen, Yi-Fan aut Enthalten in Applied intelligence Springer US, 1991 47(2017), 2 vom: 13. März, Seite 347-361 (DE-627)130990515 (DE-600)1080229-0 (DE-576)029154286 0924-669X nnns volume:47 year:2017 number:2 day:13 month:03 pages:347-361 https://doi.org/10.1007/s10489-017-0893-4 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 AR 47 2017 2 13 03 347-361 |
allfieldsSound |
10.1007/s10489-017-0893-4 doi (DE-627)OLC2066102997 (DE-He213)s10489-017-0893-4-p DE-627 ger DE-627 rakwb eng 004 VZ Wu, Meng-Luen verfasserin (orcid)0000-0002-8017-8073 aut Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks 2017 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media New York 2017 Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. 
Digital image forensics Digital image authentication Tampered image detection Artificial neural network Fahn, Chin-Shyurng aut Chen, Yi-Fan aut Enthalten in Applied intelligence Springer US, 1991 47(2017), 2 vom: 13. März, Seite 347-361 (DE-627)130990515 (DE-600)1080229-0 (DE-576)029154286 0924-669X nnns volume:47 year:2017 number:2 day:13 month:03 pages:347-361 https://doi.org/10.1007/s10489-017-0893-4 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 AR 47 2017 2 13 03 347-361 |
language |
English |
source |
Enthalten in Applied intelligence 47(2017), 2 vom: 13. März, Seite 347-361 volume:47 year:2017 number:2 day:13 month:03 pages:347-361 |
sourceStr |
Enthalten in Applied intelligence 47(2017), 2 vom: 13. März, Seite 347-361 volume:47 year:2017 number:2 day:13 month:03 pages:347-361 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Digital image forensics Digital image authentication Tampered image detection Artificial neural network |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Applied intelligence |
authorswithroles_txt_mv |
Wu, Meng-Luen @@aut@@ Fahn, Chin-Shyurng @@aut@@ Chen, Yi-Fan @@aut@@ |
publishDateDaySort_date |
2017-03-13T00:00:00Z |
hierarchy_top_id |
130990515 |
dewey-sort |
14 |
id |
OLC2066102997 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2066102997</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230502204957.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200820s2017 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10489-017-0893-4</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2066102997</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s10489-017-0893-4-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Wu, Meng-Luen</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-8017-8073</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield 
code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media New York 2017</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. 
We determined that the BPNN achieved a higher accuracy of up to 97.26 %.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Digital image forensics</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Digital image authentication</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Tampered image detection</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Artificial neural network</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Fahn, Chin-Shyurng</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chen, Yi-Fan</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Applied intelligence</subfield><subfield code="d">Springer US, 1991</subfield><subfield code="g">47(2017), 2 vom: 13. März, Seite 347-361</subfield><subfield code="w">(DE-627)130990515</subfield><subfield code="w">(DE-600)1080229-0</subfield><subfield code="w">(DE-576)029154286</subfield><subfield code="x">0924-669X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:47</subfield><subfield code="g">year:2017</subfield><subfield code="g">number:2</subfield><subfield code="g">day:13</subfield><subfield code="g">month:03</subfield><subfield code="g">pages:347-361</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s10489-017-0893-4</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">47</subfield><subfield code="j">2017</subfield><subfield code="e">2</subfield><subfield code="b">13</subfield><subfield code="c">03</subfield><subfield code="h">347-361</subfield></datafield></record></collection>
|
author |
Wu, Meng-Luen |
spellingShingle |
Wu, Meng-Luen ddc 004 misc Digital image forensics misc Digital image authentication misc Tampered image detection misc Artificial neural network Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks |
authorStr |
Wu, Meng-Luen |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)130990515 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
0924-669X |
topic_title |
004 VZ Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks Digital image forensics Digital image authentication Tampered image detection Artificial neural network |
topic |
ddc 004 misc Digital image forensics misc Digital image authentication misc Tampered image detection misc Artificial neural network |
topic_unstemmed |
ddc 004 misc Digital image forensics misc Digital image authentication misc Tampered image detection misc Artificial neural network |
topic_browse |
ddc 004 misc Digital image forensics misc Digital image authentication misc Tampered image detection misc Artificial neural network |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Applied intelligence |
hierarchy_parent_id |
130990515 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Applied intelligence |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)130990515 (DE-600)1080229-0 (DE-576)029154286 |
title |
Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks |
ctrlnum |
(DE-627)OLC2066102997 (DE-He213)s10489-017-0893-4-p |
title_full |
Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks |
author_sort |
Wu, Meng-Luen |
journal |
Applied intelligence |
journalStr |
Applied intelligence |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2017 |
contenttype_str_mv |
txt |
container_start_page |
347 |
author_browse |
Wu, Meng-Luen Fahn, Chin-Shyurng Chen, Yi-Fan |
container_volume |
47 |
class |
004 VZ |
format_se |
Aufsätze |
author-letter |
Wu, Meng-Luen |
doi_str_mv |
10.1007/s10489-017-0893-4 |
normlink |
(ORCID)0000-0002-8017-8073 |
normlink_prefix_str_mv |
(orcid)0000-0002-8017-8073 |
dewey-full |
004 |
title_sort |
image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks |
title_auth |
Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks |
abstract |
Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. © Springer Science+Business Media New York 2017 |
abstractGer |
Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. © Springer Science+Business Media New York 2017 |
abstract_unstemmed |
Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %. © Springer Science+Business Media New York 2017 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 |
container_issue |
2 |
title_short |
Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks |
url |
https://doi.org/10.1007/s10489-017-0893-4 |
remote_bool |
false |
author2 |
Fahn, Chin-Shyurng Chen, Yi-Fan |
author2Str |
Fahn, Chin-Shyurng Chen, Yi-Fan |
ppnlink |
130990515 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s10489-017-0893-4 |
up_date |
2024-07-04T03:46:18.343Z |
_version_ |
1803618645017886720 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <leader>01000caa a22002652 4500</leader>
    <controlfield tag="001">OLC2066102997</controlfield>
    <controlfield tag="003">DE-627</controlfield>
    <controlfield tag="005">20230502204957.0</controlfield>
    <controlfield tag="007">tu</controlfield>
    <controlfield tag="008">200820s2017 xx ||||| 00| ||eng c</controlfield>
    <datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10489-017-0893-4</subfield><subfield code="2">doi</subfield></datafield>
    <datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2066102997</subfield></datafield>
    <datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s10489-017-0893-4-p</subfield></datafield>
    <datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield>
    <datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield>
    <datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield>
    <datafield tag="100" ind1="1" ind2=" "><subfield code="a">Wu, Meng-Luen</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-8017-8073</subfield><subfield code="4">aut</subfield></datafield>
    <datafield tag="245" ind1="1" ind2="0"><subfield code="a">Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks</subfield></datafield>
    <datafield tag="264" ind1=" " ind2="1"><subfield code="c">2017</subfield></datafield>
    <datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield>
    <datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield>
    <datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield>
    <datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media New York 2017</subfield></datafield>
    <datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract With the advancement of photo editing software, digital documents can easily be altered, which causes some legal issues. This paper proposes an image authentication method, which determines whether an image is authentic. Unlike many existing methods that only work with images in the JPEG format, the proposed method is image format independent, implying that it works with both noncompressed images and images in all compression formats. To improve the authentication accuracy, some strategies, such as overlapping image blocks only on concurrent directions, using a two-scale local binary pattern operator, and choosing the mean deviation instead of the standard deviation, are applied. A back-propagation neural network (BPNN) is used instead of support vector machines (SVMs) for classification to make online learning easier and achieve higher accuracy. In our experiments, we used the CASIA Database (CASIA TIDE v1.0) of compressed images and the Columbia University Digital Video Multimedia (DVMM) dataset of uncompressed images to evaluate our image authentication method. This benchmark dataset includes two types of image tampering, namely image splicing and copy–move forgery. Experiments were performed using both the SVM and BPNN classifiers with various parameters. We determined that the BPNN achieved a higher accuracy of up to 97.26 %.</subfield></datafield>
    <datafield tag="650" ind1=" " ind2="4"><subfield code="a">Digital image forensics</subfield></datafield>
    <datafield tag="650" ind1=" " ind2="4"><subfield code="a">Digital image authentication</subfield></datafield>
    <datafield tag="650" ind1=" " ind2="4"><subfield code="a">Tampered image detection</subfield></datafield>
    <datafield tag="650" ind1=" " ind2="4"><subfield code="a">Artificial neural network</subfield></datafield>
    <datafield tag="700" ind1="1" ind2=" "><subfield code="a">Fahn, Chin-Shyurng</subfield><subfield code="4">aut</subfield></datafield>
    <datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chen, Yi-Fan</subfield><subfield code="4">aut</subfield></datafield>
    <datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Applied intelligence</subfield><subfield code="d">Springer US, 1991</subfield><subfield code="g">47(2017), 2 vom: 13. März, Seite 347-361</subfield><subfield code="w">(DE-627)130990515</subfield><subfield code="w">(DE-600)1080229-0</subfield><subfield code="w">(DE-576)029154286</subfield><subfield code="x">0924-669X</subfield><subfield code="7">nnns</subfield></datafield>
    <datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:47</subfield><subfield code="g">year:2017</subfield><subfield code="g">number:2</subfield><subfield code="g">day:13</subfield><subfield code="g">month:03</subfield><subfield code="g">pages:347-361</subfield></datafield>
    <datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s10489-017-0893-4</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield>
    <datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield>
    <datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield>
    <datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield>
    <datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield>
    <datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield>
    <datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield>
    <datafield tag="952" ind1=" " ind2=" "><subfield code="d">47</subfield><subfield code="j">2017</subfield><subfield code="e">2</subfield><subfield code="b">13</subfield><subfield code="c">03</subfield><subfield code="h">347-361</subfield></datafield>
  </record>
</collection>
|
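A MARCXML record like the one above can be read programmatically with any XML parser by addressing `datafield`/`subfield` elements via their `tag` and `code` attributes under the MARC21 slim namespace. The sketch below uses Python's standard-library `xml.etree.ElementTree` on a minimal, illustrative subset of this record (only the 024 DOI and 245 title fields); the `subfield` helper is an assumption of this example, not part of any MARC library.

```python
import xml.etree.ElementTree as ET

# Illustrative subset of the record above (024 = DOI, 245 = title).
MARCXML = """<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <datafield tag="024" ind1="7" ind2=" ">
      <subfield code="a">10.1007/s10489-017-0893-4</subfield>
      <subfield code="2">doi</subfield>
    </datafield>
    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">Image-format-independent tampered image detection based on overlapping concurrent directional patterns and neural networks</subfield>
    </datafield>
  </record>
</collection>"""

# MARC21 slim namespace, as declared in the record itself.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfield(record, tag, code):
    """Return the first subfield value for a given MARC field tag and subfield code."""
    node = record.find(f'marc:datafield[@tag="{tag}"]/marc:subfield[@code="{code}"]', NS)
    return node.text if node is not None else None

root = ET.fromstring(MARCXML)
rec = root.find("marc:record", NS)
print(subfield(rec, "024", "a"))  # the DOI from field 024$a
print(subfield(rec, "245", "a"))  # the title from field 245$a
```

For production use, a dedicated MARC library would also handle repeated fields (e.g. the multiple 650 and 912 fields above) and indicator values; the namespace-qualified XPath shown here is the essential pattern either way.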