TaintBench: Automatic real-world malware benchmarking of Android taint analyses
Abstract: Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and address static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors.
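For readers unfamiliar with the core concept, a taint flow is a path along which data from a sensitive *source* reaches a dangerous *sink*. The following minimal Python sketch (all names hypothetical; this is not TaintBench's documentation format, and real tools such as FlowDroid analyze Android bytecode against predefined source/sink lists) illustrates how a static analysis propagates taint along assignments:

```python
# Minimal illustration of taint propagation: a "source" marks data as
# sensitive, and the analysis flags any flow of that data into a "sink".
# The source/sink names below are hypothetical examples, chosen to evoke
# the Android APIs such lists typically contain.

SOURCES = {"getDeviceId"}    # e.g. reads the device IMEI
SINKS = {"sendTextMessage"}  # e.g. leaks data via SMS

def find_taint_flows(statements):
    """statements: list of (lhs, rhs) pairs in execution order, where rhs
    is either a called function name or a variable being copied.
    Returns (sink, tainted_var) pairs where tainted data reaches a sink."""
    tainted = set()
    flows = []
    for lhs, rhs in statements:
        if rhs in SOURCES or rhs in tainted:
            tainted.add(lhs)                 # source call or copy taints lhs
        if lhs in SINKS and rhs in tainted:  # tainted data flows into a sink
            flows.append((lhs, rhs))
    return flows

program = [
    ("imei", "getDeviceId"),     # source: imei becomes tainted
    ("msg", "imei"),             # assignment propagates taint to msg
    ("sendTextMessage", "msg"),  # sink: tainted msg leaks via SMS
]
print(find_taint_flows(program))  # → [('sendTextMessage', 'msg')]
```

Documenting such flows as ground truth (which source, which sink, along which path) is exactly the information the paper argues is usually missing from real-world evaluations.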
Detailed description

Author: Luo, Linghui (author)
Format: Article
Language: English
Published: 2021
Keywords: Taint analysis; Benchmark; Real-world benchmark; Android malware
Note: © The Author(s) 2021
Contained in: Empirical software engineering - Springer US, 1996, 27(2021), 1, 29 Oct.
Contained in: volume:27 ; year:2021 ; number:1 ; day:29 ; month:10

Links:
DOI / URN: 10.1007/s10664-021-10013-5
Catalog ID: OLC2077302100
LEADER 01000caa a22002652 4500
001 OLC2077302100
003 DE-627
005 20230505201610.0
007 tu
008 221220s2021 xx ||||| 00| ||eng c
024 7  |a 10.1007/s10664-021-10013-5 |2 doi
035    |a (DE-627)OLC2077302100
035    |a (DE-He213)s10664-021-10013-5-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |q VZ
100 1  |a Luo, Linghui |e verfasserin |4 aut
245 10 |a TaintBench: Automatic real-world malware benchmarking of Android taint analyses
264  1 |c 2021
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © The Author(s) 2021
520    |a Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors.
650  4 |a Taint analysis
650  4 |a Benchmark
650  4 |a Real-world benchmark
650  4 |a Android malware
700 1  |a Pauck, Felix |4 aut
700 1  |a Piskachev, Goran |4 aut
700 1  |a Benz, Manuel |4 aut
700 1  |a Pashchenko, Ivan |4 aut
700 1  |a Mory, Martin |4 aut
700 1  |a Bodden, Eric |4 aut
700 1  |a Hermann, Ben |4 aut
700 1  |a Massacci, Fabio |4 aut
773 08 |i Enthalten in |t Empirical software engineering |d Springer US, 1996 |g 27(2021), 1 vom: 29. Okt. |w (DE-627)235946516 |w (DE-600)1401304-6 |w (DE-576)102432406 |x 1382-3256 |7 nnns
773 18 |g volume:27 |g year:2021 |g number:1 |g day:29 |g month:10
856 41 |u https://doi.org/10.1007/s10664-021-10013-5 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
951    |a AR
952    |d 27 |j 2021 |e 1 |b 29 |c 10
author_variant |
l l ll f p fp g p gp m b mb i p ip m m mm e b eb b h bh f m fm |
---|---|
matchkey_str |
article:13823256:2021----::anbnhuoairawrdawrbnhaknoa |
hierarchy_sort_str |
2021 |
publishDate |
2021 |
allfields |
10.1007/s10664-021-10013-5 doi (DE-627)OLC2077302100 (DE-He213)s10664-021-10013-5-p DE-627 ger DE-627 rakwb eng 004 VZ Luo, Linghui verfasserin aut TaintBench: Automatic real-world malware benchmarking of Android taint analyses 2021 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s) 2021 Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. 
Taint analysis Benchmark Real-world benchmark Android malware Pauck, Felix aut Piskachev, Goran aut Benz, Manuel aut Pashchenko, Ivan aut Mory, Martin aut Bodden, Eric aut Hermann, Ben aut Massacci, Fabio aut Enthalten in Empirical software engineering Springer US, 1996 27(2021), 1 vom: 29. Okt. (DE-627)235946516 (DE-600)1401304-6 (DE-576)102432406 1382-3256 nnns volume:27 year:2021 number:1 day:29 month:10 https://doi.org/10.1007/s10664-021-10013-5 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2021 1 29 10 |
spelling |
10.1007/s10664-021-10013-5 doi (DE-627)OLC2077302100 (DE-He213)s10664-021-10013-5-p DE-627 ger DE-627 rakwb eng 004 VZ Luo, Linghui verfasserin aut TaintBench: Automatic real-world malware benchmarking of Android taint analyses 2021 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s) 2021 Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. 
Taint analysis Benchmark Real-world benchmark Android malware Pauck, Felix aut Piskachev, Goran aut Benz, Manuel aut Pashchenko, Ivan aut Mory, Martin aut Bodden, Eric aut Hermann, Ben aut Massacci, Fabio aut Enthalten in Empirical software engineering Springer US, 1996 27(2021), 1 vom: 29. Okt. (DE-627)235946516 (DE-600)1401304-6 (DE-576)102432406 1382-3256 nnns volume:27 year:2021 number:1 day:29 month:10 https://doi.org/10.1007/s10664-021-10013-5 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2021 1 29 10 |
allfields_unstemmed |
10.1007/s10664-021-10013-5 doi (DE-627)OLC2077302100 (DE-He213)s10664-021-10013-5-p DE-627 ger DE-627 rakwb eng 004 VZ Luo, Linghui verfasserin aut TaintBench: Automatic real-world malware benchmarking of Android taint analyses 2021 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s) 2021 Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. 
Taint analysis Benchmark Real-world benchmark Android malware Pauck, Felix aut Piskachev, Goran aut Benz, Manuel aut Pashchenko, Ivan aut Mory, Martin aut Bodden, Eric aut Hermann, Ben aut Massacci, Fabio aut Enthalten in Empirical software engineering Springer US, 1996 27(2021), 1 vom: 29. Okt. (DE-627)235946516 (DE-600)1401304-6 (DE-576)102432406 1382-3256 nnns volume:27 year:2021 number:1 day:29 month:10 https://doi.org/10.1007/s10664-021-10013-5 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2021 1 29 10 |
allfieldsGer |
10.1007/s10664-021-10013-5 doi (DE-627)OLC2077302100 (DE-He213)s10664-021-10013-5-p DE-627 ger DE-627 rakwb eng 004 VZ Luo, Linghui verfasserin aut TaintBench: Automatic real-world malware benchmarking of Android taint analyses 2021 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s) 2021 Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. 
Taint analysis Benchmark Real-world benchmark Android malware Pauck, Felix aut Piskachev, Goran aut Benz, Manuel aut Pashchenko, Ivan aut Mory, Martin aut Bodden, Eric aut Hermann, Ben aut Massacci, Fabio aut Enthalten in Empirical software engineering Springer US, 1996 27(2021), 1 vom: 29. Okt. (DE-627)235946516 (DE-600)1401304-6 (DE-576)102432406 1382-3256 nnns volume:27 year:2021 number:1 day:29 month:10 https://doi.org/10.1007/s10664-021-10013-5 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2021 1 29 10 |
allfieldsSound |
10.1007/s10664-021-10013-5 doi (DE-627)OLC2077302100 (DE-He213)s10664-021-10013-5-p DE-627 ger DE-627 rakwb eng 004 VZ Luo, Linghui verfasserin aut TaintBench: Automatic real-world malware benchmarking of Android taint analyses 2021 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s) 2021 Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. 
Taint analysis Benchmark Real-world benchmark Android malware Pauck, Felix aut Piskachev, Goran aut Benz, Manuel aut Pashchenko, Ivan aut Mory, Martin aut Bodden, Eric aut Hermann, Ben aut Massacci, Fabio aut Enthalten in Empirical software engineering Springer US, 1996 27(2021), 1 vom: 29. Okt. (DE-627)235946516 (DE-600)1401304-6 (DE-576)102432406 1382-3256 nnns volume:27 year:2021 number:1 day:29 month:10 https://doi.org/10.1007/s10664-021-10013-5 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2021 1 29 10 |
language |
English |
source |
Enthalten in Empirical software engineering 27(2021), 1 vom: 29. Okt. volume:27 year:2021 number:1 day:29 month:10 |
sourceStr |
Enthalten in Empirical software engineering 27(2021), 1 vom: 29. Okt. volume:27 year:2021 number:1 day:29 month:10 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Taint analysis Benchmark Real-world benchmark Android malware |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Empirical software engineering |
authorswithroles_txt_mv |
Luo, Linghui @@aut@@ Pauck, Felix @@aut@@ Piskachev, Goran @@aut@@ Benz, Manuel @@aut@@ Pashchenko, Ivan @@aut@@ Mory, Martin @@aut@@ Bodden, Eric @@aut@@ Hermann, Ben @@aut@@ Massacci, Fabio @@aut@@ |
publishDateDaySort_date |
2021-10-29T00:00:00Z |
hierarchy_top_id |
235946516 |
dewey-sort |
14 |
id |
OLC2077302100 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2077302100</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230505201610.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">221220s2021 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10664-021-10013-5</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2077302100</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s10664-021-10013-5-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Luo, Linghui</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">TaintBench: Automatic real-world malware benchmarking of Android taint analyses</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield 
code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s) 2021</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. 
(iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Taint analysis</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Benchmark</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Real-world benchmark</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Android malware</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Pauck, Felix</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Piskachev, Goran</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Benz, Manuel</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Pashchenko, Ivan</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mory, Martin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Bodden, Eric</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hermann, Ben</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Massacci, Fabio</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Empirical software engineering</subfield><subfield code="d">Springer US, 1996</subfield><subfield code="g">27(2021), 1 vom: 29. 
Okt.</subfield><subfield code="w">(DE-627)235946516</subfield><subfield code="w">(DE-600)1401304-6</subfield><subfield code="w">(DE-576)102432406</subfield><subfield code="x">1382-3256</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:27</subfield><subfield code="g">year:2021</subfield><subfield code="g">number:1</subfield><subfield code="g">day:29</subfield><subfield code="g">month:10</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s10664-021-10013-5</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">27</subfield><subfield code="j">2021</subfield><subfield code="e">1</subfield><subfield code="b">29</subfield><subfield code="c">10</subfield></datafield></record></collection>
|
author |
Luo, Linghui |
spellingShingle |
Luo, Linghui ddc 004 misc Taint analysis misc Benchmark misc Real-world benchmark misc Android malware TaintBench: Automatic real-world malware benchmarking of Android taint analyses |
authorStr |
Luo, Linghui |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)235946516 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
1382-3256 |
topic_title |
004 VZ TaintBench: Automatic real-world malware benchmarking of Android taint analyses Taint analysis Benchmark Real-world benchmark Android malware |
topic |
ddc 004 misc Taint analysis misc Benchmark misc Real-world benchmark misc Android malware |
topic_unstemmed |
ddc 004 misc Taint analysis misc Benchmark misc Real-world benchmark misc Android malware |
topic_browse |
ddc 004 misc Taint analysis misc Benchmark misc Real-world benchmark misc Android malware |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Empirical software engineering |
hierarchy_parent_id |
235946516 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Empirical software engineering |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)235946516 (DE-600)1401304-6 (DE-576)102432406 |
title |
TaintBench: Automatic real-world malware benchmarking of Android taint analyses |
ctrlnum |
(DE-627)OLC2077302100 (DE-He213)s10664-021-10013-5-p |
title_full |
TaintBench: Automatic real-world malware benchmarking of Android taint analyses |
author_sort |
Luo, Linghui |
journal |
Empirical software engineering |
journalStr |
Empirical software engineering |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
txt |
author_browse |
Luo, Linghui Pauck, Felix Piskachev, Goran Benz, Manuel Pashchenko, Ivan Mory, Martin Bodden, Eric Hermann, Ben Massacci, Fabio |
container_volume |
27 |
class |
004 VZ |
format_se |
Aufsätze |
author-letter |
Luo, Linghui |
doi_str_mv |
10.1007/s10664-021-10013-5 |
dewey-full |
004 |
title_sort |
taintbench: automatic real-world malware benchmarking of android taint analyses |
title_auth |
TaintBench: Automatic real-world malware benchmarking of Android taint analyses |
abstract |
Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. © The Author(s) 2021 |
abstractGer |
Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. © The Author(s) 2021 |
abstract_unstemmed |
Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors. © The Author(s) 2021 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT |
container_issue |
1 |
title_short |
TaintBench: Automatic real-world malware benchmarking of Android taint analyses |
url |
https://doi.org/10.1007/s10664-021-10013-5 |
remote_bool |
false |
author2 |
Pauck, Felix Piskachev, Goran Benz, Manuel Pashchenko, Ivan Mory, Martin Bodden, Eric Hermann, Ben Massacci, Fabio |
author2Str |
Pauck, Felix Piskachev, Goran Benz, Manuel Pashchenko, Ivan Mory, Martin Bodden, Eric Hermann, Ben Massacci, Fabio |
ppnlink |
235946516 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s10664-021-10013-5 |
up_date |
2024-07-03T14:50:01.269Z |
_version_ |
1803569805446348800 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <leader>01000caa a22002652 4500</leader>
    <controlfield tag="001">OLC2077302100</controlfield>
    <controlfield tag="003">DE-627</controlfield>
    <controlfield tag="005">20230505201610.0</controlfield>
    <controlfield tag="007">tu</controlfield>
    <controlfield tag="008">221220s2021 xx ||||| 00| ||eng c</controlfield>
    <datafield tag="024" ind1="7" ind2=" ">
      <subfield code="a">10.1007/s10664-021-10013-5</subfield>
      <subfield code="2">doi</subfield>
    </datafield>
    <datafield tag="035" ind1=" " ind2=" ">
      <subfield code="a">(DE-627)OLC2077302100</subfield>
    </datafield>
    <datafield tag="035" ind1=" " ind2=" ">
      <subfield code="a">(DE-He213)s10664-021-10013-5-p</subfield>
    </datafield>
    <datafield tag="040" ind1=" " ind2=" ">
      <subfield code="a">DE-627</subfield>
      <subfield code="b">ger</subfield>
      <subfield code="c">DE-627</subfield>
      <subfield code="e">rakwb</subfield>
    </datafield>
    <datafield tag="041" ind1=" " ind2=" ">
      <subfield code="a">eng</subfield>
    </datafield>
    <datafield tag="082" ind1="0" ind2="4">
      <subfield code="a">004</subfield>
      <subfield code="q">VZ</subfield>
    </datafield>
    <datafield tag="100" ind1="1" ind2=" ">
      <subfield code="a">Luo, Linghui</subfield>
      <subfield code="e">verfasserin</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">TaintBench: Automatic real-world malware benchmarking of Android taint analyses</subfield>
    </datafield>
    <datafield tag="264" ind1=" " ind2="1">
      <subfield code="c">2021</subfield>
    </datafield>
    <datafield tag="336" ind1=" " ind2=" ">
      <subfield code="a">Text</subfield>
      <subfield code="b">txt</subfield>
      <subfield code="2">rdacontent</subfield>
    </datafield>
    <datafield tag="337" ind1=" " ind2=" ">
      <subfield code="a">ohne Hilfsmittel zu benutzen</subfield>
      <subfield code="b">n</subfield>
      <subfield code="2">rdamedia</subfield>
    </datafield>
    <datafield tag="338" ind1=" " ind2=" ">
      <subfield code="a">Band</subfield>
      <subfield code="b">nc</subfield>
      <subfield code="2">rdacarrier</subfield>
    </datafield>
    <datafield tag="500" ind1=" " ind2=" ">
      <subfield code="a">© The Author(s) 2021</subfield>
    </datafield>
    <datafield tag="520" ind1=" " ind2=" ">
      <subfield code="a">Abstract Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world apps, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. To push Android taint analysis research forward, this paper thus recommends criteria for constructing real-world benchmark suites for this specific domain, and presents TaintBench, the first real-world malware benchmark suite with documented taint flows. TaintBench benchmark apps include taint flows with complex structures, and addresses static challenges that are commonly agreed on by the community. Together with the TaintBench suite, we introduce the TaintBench framework, whose goal is to simplify real-world benchmarking of Android taint analyses. First, a usability test shows that the framework improves experts’ performance and perceived usability when documenting and inspecting taint flows. Second, experiments using TaintBench reveal new insights for the taint analysis tools Amandroid and FlowDroid: (i) They are less effective on real-world malware apps than on synthetic benchmark apps. (ii) Predefined lists of sources and sinks heavily impact the tools’ accuracy. (iii) Surprisingly, up-to-date versions of both tools are less accurate than their predecessors.</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="a">Taint analysis</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="a">Benchmark</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="a">Real-world benchmark</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="a">Android malware</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Pauck, Felix</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Piskachev, Goran</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Benz, Manuel</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Pashchenko, Ivan</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Mory, Martin</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Bodden, Eric</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Hermann, Ben</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Massacci, Fabio</subfield>
      <subfield code="4">aut</subfield>
    </datafield>
    <datafield tag="773" ind1="0" ind2="8">
      <subfield code="i">Enthalten in</subfield>
      <subfield code="t">Empirical software engineering</subfield>
      <subfield code="d">Springer US, 1996</subfield>
      <subfield code="g">27(2021), 1 vom: 29. Okt.</subfield>
      <subfield code="w">(DE-627)235946516</subfield>
      <subfield code="w">(DE-600)1401304-6</subfield>
      <subfield code="w">(DE-576)102432406</subfield>
      <subfield code="x">1382-3256</subfield>
      <subfield code="7">nnns</subfield>
    </datafield>
    <datafield tag="773" ind1="1" ind2="8">
      <subfield code="g">volume:27</subfield>
      <subfield code="g">year:2021</subfield>
      <subfield code="g">number:1</subfield>
      <subfield code="g">day:29</subfield>
      <subfield code="g">month:10</subfield>
    </datafield>
    <datafield tag="856" ind1="4" ind2="1">
      <subfield code="u">https://doi.org/10.1007/s10664-021-10013-5</subfield>
      <subfield code="z">lizenzpflichtig</subfield>
      <subfield code="3">Volltext</subfield>
    </datafield>
    <datafield tag="912" ind1=" " ind2=" ">
      <subfield code="a">GBV_USEFLAG_A</subfield>
    </datafield>
    <datafield tag="912" ind1=" " ind2=" ">
      <subfield code="a">SYSFLAG_A</subfield>
    </datafield>
    <datafield tag="912" ind1=" " ind2=" ">
      <subfield code="a">GBV_OLC</subfield>
    </datafield>
    <datafield tag="912" ind1=" " ind2=" ">
      <subfield code="a">SSG-OLC-MAT</subfield>
    </datafield>
    <datafield tag="951" ind1=" " ind2=" ">
      <subfield code="a">AR</subfield>
    </datafield>
    <datafield tag="952" ind1=" " ind2=" ">
      <subfield code="d">27</subfield>
      <subfield code="j">2021</subfield>
      <subfield code="e">1</subfield>
      <subfield code="b">29</subfield>
      <subfield code="c">10</subfield>
    </datafield>
  </record>
</collection>