Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric characteristics of generated multiple-choice test items are largely unknown and undocumented. We present item analysis results from one of the first empirical studies designed to evaluate the psychometric properties of generated multiple-choice items using the results from a high stakes national medical licensure examination. The item analysis results for the correct option revealed that the generated items measured examinees' performance across a broad range of ability levels while, at the same time, providing a consistently strong level of discrimination for each item. Results for the incorrect options revealed that the generated items consistently differentiated the low from the high performing examinees.
Detailed description
| Field | Value |
|---|---|
| Author | Gierl, Mark J (author) |
| Format | Article |
| Language | English |
| Published | 2016 |
| Rights | Usage rights: © 2016 Taylor & Francis |
| Subjects | Automation; Cognitive models; Information technology; Performance evaluation; Quantitative psychology; Educational tests & measurements |
| Contained in | Applied measurement in education. Chestnut, Pa. [u.a.]: Routledge, Taylor & Francis Group, 1988; 29(2016), 3, page 196 |
| Citation | volume:29; year:2016; number:3; pages:196 |
| Full text | http://dx.doi.org/10.1080/08957347.2016.1171768 |
| DOI | 10.1080/08957347.2016.1171768 |
| Catalog ID | OLC1979325227 |
MARC record:

LEADER 01000caa a2200265 4500
001    OLC1979325227
003    DE-627
005    20230714203412.0
007    tu
008    160720s2016 xx ||||| 00| ||eng c
024 7  |a 10.1080/08957347.2016.1171768 |2 doi
028 52 |a PQ20161012
035    |a (DE-627)OLC1979325227
035    |a (DE-599)GBVOLC1979325227
035    |a (PRQ)c1307-7671748b9271ce49ba18eb27999286383c0216db8e575aa33708a633e4facbfb0
035    |a (KEY)0166940020160000029000300196evaluatingthepsychometriccharacteristicsofgenerate
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 370 |q DNB
100 1  |a Gierl, Mark J |e verfasserin |4 aut
245 10 |a Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items
264  1 |c 2016
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
520    |a Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric characteristics of generated multiple-choice test items are largely unknown and undocumented. We present item analysis results from one of the first empirical studies designed to evaluate the psychometric properties of generated multiple-choice items using the results from a high stakes national medical licensure examination. The item analysis results for the correct option revealed that the generated items measured examinees' performance across a broad range of ability levels while, at the same time, providing a consistently strong level of discrimination for each item. Results for the incorrect options revealed that the generated items consistently differentiated the low from the high performing examinees.
540    |a Nutzungsrecht: © 2016 Taylor & Francis 2016
650  4 |a Automation
650  4 |a Cognitive models
650  4 |a Information technology
650  4 |a Performance evaluation
650  4 |a Quantitative psychology
650  4 |a Educational tests & measurements
700 1  |a Lai, Hollis |4 oth
700 1  |a Pugh, Debra |4 oth
700 1  |a Touchie, Claire |4 oth
700 1  |a Boulais, André-Philippe |4 oth
700 1  |a De Champlain, André |4 oth
773 08 |i Enthalten in |t Applied measurement in education |d Chestnut, Pa. [u.a.] : Routledge, Taylor & Francis Group, 1988 |g 29(2016), 3, Seite 196 |w (DE-627)188004599 |w (DE-600)1268255-X |w (DE-576)056399685 |x 0895-7347 |7 nnns
773 18 |g volume:29 |g year:2016 |g number:3 |g pages:196
856 41 |u http://dx.doi.org/10.1080/08957347.2016.1171768 |3 Volltext
856 42 |u http://www.tandfonline.com/doi/abs/10.1080/08957347.2016.1171768
856 42 |u http://search.proquest.com/docview/1793587007
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-BIF
912    |a GBV_ILN_4012
951    |a AR
952    |d 29 |j 2016 |e 3 |h 196
: Routledge, Taylor & Francis Group, 1988</subfield><subfield code="g">29(2016), 3, Seite 196</subfield><subfield code="w">(DE-627)188004599</subfield><subfield code="w">(DE-600)1268255-X</subfield><subfield code="w">(DE-576)056399685</subfield><subfield code="x">0895-7347</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:29</subfield><subfield code="g">year:2016</subfield><subfield code="g">number:3</subfield><subfield code="g">pages:196</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">http://dx.doi.org/10.1080/08957347.2016.1171768</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">http://www.tandfonline.com/doi/abs/10.1080/08957347.2016.1171768</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">http://search.proquest.com/docview/1793587007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-BIF</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">29</subfield><subfield code="j">2016</subfield><subfield code="e">3</subfield><subfield code="h">196</subfield></datafield></record></collection>