Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines
Abstract: UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of the respective models is essential and a relevant question for practice, supporting model selection and design as well as subsequent test derivation. The objective of this paper is therefore to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate this in a controlled student experiment using three measures of comprehensibility: (1) self-assessed comprehensibility, (2) actual comprehensibility measured by the correctness of answers to comprehension questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with 84 participants overall, divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation, and we discuss how these results can improve system modeling and test case design.
Detailed description

Author: Felderer, Michael [author]
Format: Article
Language: English
Published: 2018
Subjects: UML models; System testing; System models; Test design; Model comprehensibility; Controlled experiment
Note: © The Author(s) 2018
Contained in: Software quality journal - Springer US, 1992, 27(2018), 1, published 23 Apr., pages 125-147
Contained in: volume:27 ; year:2018 ; number:1 ; day:23 ; month:04 ; pages:125-147
Links:
DOI / URN: 10.1007/s11219-018-9407-9
Catalog ID: OLC2033734250
LEADER 01000caa a22002652 4500
001    OLC2033734250
003    DE-627
005    20230504051312.0
007    tu
008    200819s2018 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11219-018-9407-9 |2 doi
035    |a (DE-627)OLC2033734250
035    |a (DE-He213)s11219-018-9407-9-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |q VZ
100 1  |a Felderer, Michael |e verfasserin |4 aut
245 10 |a Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines
264  1 |c 2018
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © The Author(s) 2018
520    |a Abstract UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with overall 84 participants divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation and discusses how these results can improve system modeling and test case design.
650  4 |a UML models
650  4 |a System testing
650  4 |a System models
650  4 |a Test design
650  4 |a Model comprehensibility
650  4 |a Controlled experiment
700 1  |a Herrmann, Andrea |4 aut
773 08 |i Enthalten in |t Software quality journal |d Springer US, 1992 |g 27(2018), 1 vom: 23. Apr., Seite 125-147 |w (DE-627)131154087 |w (DE-600)1131702-4 |w (DE-576)04308236X |x 0963-9314 |7 nnns
773 18 |g volume:27 |g year:2018 |g number:1 |g day:23 |g month:04 |g pages:125-147
856 41 |u https://doi.org/10.1007/s11219-018-9407-9 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
912    |a GBV_ILN_70
951    |a AR
952    |d 27 |j 2018 |e 1 |b 23 |c 04 |h 125-147
author_variant |
m f mf a h ah |
matchkey_str |
article:09639314:2018----::opeesbltossemdldrntsdsgaotoldxeietoprnulci |
hierarchy_sort_str |
2018 |
publishDate |
2018 |
language |
English |
source |
Enthalten in Software quality journal 27(2018), 1 vom: 23. Apr., Seite 125-147 volume:27 year:2018 number:1 day:23 month:04 pages:125-147 |
sourceStr |
Enthalten in Software quality journal 27(2018), 1 vom: 23. Apr., Seite 125-147 volume:27 year:2018 number:1 day:23 month:04 pages:125-147 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
UML models System testing System models Test design Model comprehensibility Controlled experiment |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Software quality journal |
authorswithroles_txt_mv |
Felderer, Michael @@aut@@ Herrmann, Andrea @@aut@@ |
publishDateDaySort_date |
2018-04-23T00:00:00Z |
hierarchy_top_id |
131154087 |
dewey-sort |
14 |
id |
OLC2033734250 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2033734250</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230504051312.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2018 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11219-018-9407-9</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2033734250</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11219-018-9407-9-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Felderer, Michael</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2018</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s) 2018</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with overall 84 participants divided into three groups at two institutions. 
Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation and discusses how these results can improve system modeling and test case design.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">UML models</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">System testing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">System models</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Test design</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Model comprehensibility</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Controlled experiment</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Herrmann, Andrea</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Software quality journal</subfield><subfield code="d">Springer US, 1992</subfield><subfield code="g">27(2018), 1 vom: 23. 
Apr., Seite 125-147</subfield><subfield code="w">(DE-627)131154087</subfield><subfield code="w">(DE-600)1131702-4</subfield><subfield code="w">(DE-576)04308236X</subfield><subfield code="x">0963-9314</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:27</subfield><subfield code="g">year:2018</subfield><subfield code="g">number:1</subfield><subfield code="g">day:23</subfield><subfield code="g">month:04</subfield><subfield code="g">pages:125-147</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11219-018-9407-9</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">27</subfield><subfield code="j">2018</subfield><subfield code="e">1</subfield><subfield code="b">23</subfield><subfield code="c">04</subfield><subfield code="h">125-147</subfield></datafield></record></collection>
|
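The `fullrecord` field above is a standard MARCXML serialization of the catalog entry. As a minimal sketch of how such a record can be consumed programmatically, the following Python-stdlib snippet extracts the DOI (field 024), the title (245), and the authors (100/700) from a trimmed excerpt of the record. The helper `subfields` is illustrative, not part of any library.

```python
# Minimal sketch: reading key fields out of the MARCXML record above
# with only the Python standard library. Tags follow the MARC 21
# bibliographic format: 024 = other standard identifier (here a DOI),
# 100/700 = main/added author entries, 245 = title statement.
import xml.etree.ElementTree as ET

NS = {"m": "http://www.loc.gov/MARC21/slim"}

# Trimmed excerpt of the fullrecord shown in this document.
MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="024" ind1="7" ind2=" ">
    <subfield code="a">10.1007/s11219-018-9407-9</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Felderer, Michael</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
    <subfield code="a">Herrmann, Andrea</subfield>
  </datafield>
</record>"""


def subfields(record, tag, code="a"):
    """Return all $code values of every datafield with the given tag."""
    path = f"m:datafield[@tag='{tag}']/m:subfield[@code='{code}']"
    return [sf.text for sf in record.findall(path, NS)]


record = ET.fromstring(MARCXML)
doi = subfields(record, "024")[0]
title = subfields(record, "245")[0]
authors = subfields(record, "100") + subfields(record, "700")

print(doi)      # 10.1007/s11219-018-9407-9
print(authors)  # ['Felderer, Michael', 'Herrmann, Andrea']
```

The same pattern generalizes to any of the datafields in the full record (e.g. tag 650 for the subject keywords, or 773 for the host-journal linkage).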
author |
Felderer, Michael |
spellingShingle |
Felderer, Michael ddc 004 misc UML models misc System testing misc System models misc Test design misc Model comprehensibility misc Controlled experiment Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines |
authorStr |
Felderer, Michael |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)131154087 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
0963-9314 |
topic_title |
004 VZ Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines UML models System testing System models Test design Model comprehensibility Controlled experiment |
topic |
ddc 004 misc UML models misc System testing misc System models misc Test design misc Model comprehensibility misc Controlled experiment |
topic_unstemmed |
ddc 004 misc UML models misc System testing misc System models misc Test design misc Model comprehensibility misc Controlled experiment |
topic_browse |
ddc 004 misc UML models misc System testing misc System models misc Test design misc Model comprehensibility misc Controlled experiment |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Software quality journal |
hierarchy_parent_id |
131154087 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Software quality journal |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)131154087 (DE-600)1131702-4 (DE-576)04308236X |
title |
Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines |
ctrlnum |
(DE-627)OLC2033734250 (DE-He213)s11219-018-9407-9-p |
title_full |
Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines |
author_sort |
Felderer, Michael |
journal |
Software quality journal |
journalStr |
Software quality journal |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2018 |
contenttype_str_mv |
txt |
container_start_page |
125 |
author_browse |
Felderer, Michael Herrmann, Andrea |
container_volume |
27 |
class |
004 VZ |
format_se |
Aufsätze |
author-letter |
Felderer, Michael |
doi_str_mv |
10.1007/s11219-018-9407-9 |
dewey-full |
004 |
title_sort |
comprehensibility of system models during test design: a controlled experiment comparing uml activity diagrams and state machines |
title_auth |
Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines |
abstract |
Abstract UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with overall 84 participants divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation and discusses how these results can improve system modeling and test case design. © The Author(s) 2018 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 |
container_issue |
1 |
title_short |
Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines |
url |
https://doi.org/10.1007/s11219-018-9407-9 |
remote_bool |
false |
author2 |
Herrmann, Andrea |
author2Str |
Herrmann, Andrea |
ppnlink |
131154087 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11219-018-9407-9 |
up_date |
2024-07-03T18:13:41.608Z |
_version_ |
1803582619370127360 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2033734250</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230504051312.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2018 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11219-018-9407-9</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2033734250</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11219-018-9407-9-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Felderer, Michael</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2018</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s) 2018</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with overall 84 participants divided into three groups at two institutions. 
Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation. We discuss how these results can improve system modeling and test case design.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">UML models</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">System testing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">System models</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Test design</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Model comprehensibility</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Controlled experiment</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Herrmann, Andrea</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Software quality journal</subfield><subfield code="d">Springer US, 1992</subfield><subfield code="g">27(2018), 1 vom: 23. 
Apr., Seite 125-147</subfield><subfield code="w">(DE-627)131154087</subfield><subfield code="w">(DE-600)1131702-4</subfield><subfield code="w">(DE-576)04308236X</subfield><subfield code="x">0963-9314</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:27</subfield><subfield code="g">year:2018</subfield><subfield code="g">number:1</subfield><subfield code="g">day:23</subfield><subfield code="g">month:04</subfield><subfield code="g">pages:125-147</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11219-018-9407-9</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">27</subfield><subfield code="j">2018</subfield><subfield code="e">1</subfield><subfield code="b">23</subfield><subfield code="c">04</subfield><subfield code="h">125-147</subfield></datafield></record></collection>