Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc
Abstract: Many studies use different categories of images to define their conditions. Since any difference between these categories is a valid candidate to explain category-related behavioral differences, knowledge about the objective image differences between categories is crucial for the interpretation of the behaviors. However, natural images vary in many image features and not every feature is equally important in describing the differences between the categories. Here, we provide a methodological approach to find as many of the image features as possible, using machine learning performance as a tool, that have predictive value over the category the images belong to. In other words, we describe a means to find the features of a group of images by which the categories can be objectively and quantitatively defined. Note that we are not aiming to provide a means for the best possible decoding performance; instead, our aim is to uncover prototypical characteristics of the categories. To facilitate the use of this method, we offer an open-source, MATLAB-based toolbox that performs such an analysis and aids the user in visualizing the features of relevance. We first applied the toolbox to a mock data set with a ground truth to show the sensitivity of the approach. Next, we applied the toolbox to a set of natural images as a more practical example.
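The core idea in the abstract is to use classifier performance to flag every image feature that predicts category membership, rather than to search for one best-decoding feature subset. The published toolbox is MATLAB-based; below is a minimal Python sketch of one way such a per-feature screen could work. The function name, the nearest-class-mean classifier, and the label-permutation test are illustrative assumptions, not Protosc's actual implementation.

```python
import numpy as np

def feature_predictive_value(X, y, n_perm=200, alpha=0.05, seed=None):
    """Flag each feature (column of X) whose leave-one-out accuracy with a
    nearest-class-mean classifier beats a label-permutation null (binary y)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def loo_acc(col, labels):
        correct = 0
        for i in range(n):
            mask = np.ones(n, dtype=bool)
            mask[i] = False  # hold out sample i
            m0 = col[mask & (labels == 0)].mean()
            m1 = col[mask & (labels == 1)].mean()
            pred = 1 if abs(col[i] - m1) < abs(col[i] - m0) else 0
            correct += int(pred == labels[i])
        return correct / n

    keep = np.zeros(d, dtype=bool)
    for j in range(d):
        observed = loo_acc(X[:, j], y)
        null = np.array([loo_acc(X[:, j], rng.permutation(y))
                         for _ in range(n_perm)])
        # permutation p-value: fraction of shuffled runs at least as accurate
        keep[j] = (null >= observed).mean() < alpha
    return keep
```

Every feature that beats its permutation null is kept, which mirrors the stated goal of finding as many predictive features as possible instead of a minimal subset optimized for decoding performance.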
Detailed description

Author: Stuit, S. M. [author]
Format: E-Article
Language: English
Published: 2021
Keywords:
Note: © The Author(s) 2021
Parent work: Contained in: Behavior research methods, instruments & computers - Austin, Tex. : Psychonomic Society Publ., 1984, 54(2021), 5, 16 Dec., pages 2422-2432
Parent work: volume:54 ; year:2021 ; number:5 ; day:16 ; month:12 ; pages:2422-2432
Links:
DOI / URN: 10.3758/s13428-021-01737-9
Catalog ID: SPR048391093
---|
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | SPR048391093 | ||
003 | DE-627 | ||
005 | 20230509114116.0 | ||
007 | cr uuu---uuuuu | ||
008 | 221019s2021 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.3758/s13428-021-01737-9 |2 doi | |
035 | |a (DE-627)SPR048391093 | ||
035 | |a (SPR)s13428-021-01737-9-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Stuit, S. M. |e verfasserin |0 (orcid)0000-0003-3891-2171 |4 aut | |
245 | 1 | 0 | |a Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc |
264 | 1 | |c 2021 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a © The Author(s) 2021 | ||
520 | |a Abstract Many studies use different categories of images to define their conditions. Since any difference between these categories is a valid candidate to explain category-related behavioral differences, knowledge about the objective image differences between categories is crucial for the interpretation of the behaviors. However, natural images vary in many image features and not every feature is equally important in describing the differences between the categories. Here, we provide a methodological approach to find as many of the image features as possible, using machine learning performance as a tool, that have predictive value over the category the images belong to. In other words, we describe a means to find the features of a group of images by which the categories can be objectively and quantitatively defined. Note that we are not aiming to provide a means for the best possible decoding performance; instead, our aim is to uncover prototypical characteristics of the categories. To facilitate the use of this method, we offer an open-source, MATLAB-based toolbox that performs such an analysis and aids the user in visualizing the features of relevance. We first applied the toolbox to a mock data set with a ground truth to show the sensitivity of the approach. Next, we applied the toolbox to a set of natural images as a more practical example. | ||
650 | 4 | |a Image Statistics |7 (dpeaa)DE-He213 | |
650 | 4 | |a Machine Learning |7 (dpeaa)DE-He213 | |
650 | 4 | |a Toolbox |7 (dpeaa)DE-He213 | |
650 | 4 | |a Matlab |7 (dpeaa)DE-He213 | |
700 | 1 | |a Paffen, C. L. E. |4 aut | |
700 | 1 | |a Van der Stigchel, S. |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Behavior research methods, instruments & computers |d Austin, Tex. : Psychonomic Society Publ., 1984 |g 54(2021), 5 vom: 16. Dez., Seite 2422-2432 |w (DE-627)32998067X |w (DE-600)2048669-8 |x 1532-5970 |7 nnns |
773 | 1 | 8 | |g volume:54 |g year:2021 |g number:5 |g day:16 |g month:12 |g pages:2422-2432 |
856 | 4 | 0 | |u https://dx.doi.org/10.3758/s13428-021-01737-9 |z kostenfrei |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2014 | ||
951 | |a AR | ||
952 | |d 54 |j 2021 |e 5 |b 16 |c 12 |h 2422-2432 |
author_variant |
s m s sm sms c l e p cle clep d s s v dss dssv |
matchkey_str |
article:15325970:2021----::nrdcnterttpcltmlshrcei |
hierarchy_sort_str |
2021 |
publishDate |
2021 |
language |
English |
source |
Enthalten in Behavior research methods, instruments & computers 54(2021), 5 vom: 16. Dez., Seite 2422-2432 volume:54 year:2021 number:5 day:16 month:12 pages:2422-2432 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Image Statistics Machine Learning Toolbox Matlab |
isfreeaccess_bool |
true |
container_title |
Behavior research methods, instruments & computers |
authorswithroles_txt_mv |
Stuit, S. M. @@aut@@ Paffen, C. L. E. @@aut@@ Van der Stigchel, S. @@aut@@ |
publishDateDaySort_date |
2021-12-16T00:00:00Z |
hierarchy_top_id |
32998067X |
id |
SPR048391093 |
language_de |
englisch |
author |
Stuit, S. M. |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)32998067X |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1532-5970 |
topic_title |
Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc Image Statistics (dpeaa)DE-He213 Machine Learning (dpeaa)DE-He213 Toolbox (dpeaa)DE-He213 Matlab (dpeaa)DE-He213 |
topic |
misc Image Statistics misc Machine Learning misc Toolbox misc Matlab |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Behavior research methods, instruments & computers |
hierarchy_parent_id |
32998067X |
hierarchy_top_title |
Behavior research methods, instruments & computers |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)32998067X (DE-600)2048669-8 |
title |
Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc |
ctrlnum |
(DE-627)SPR048391093 (SPR)s13428-021-01737-9-e |
title_full |
Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc |
author_sort |
Stuit, S. M. |
journal |
Behavior research methods, instruments & computers |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
txt |
container_start_page |
2422 |
author_browse |
Stuit, S. M. Paffen, C. L. E. Van der Stigchel, S. |
container_volume |
54 |
format_se |
Elektronische Aufsätze |
author-letter |
Stuit, S. M. |
doi_str_mv |
10.3758/s13428-021-01737-9 |
normlink |
(ORCID)0000-0003-3891-2171 |
normlink_prefix_str_mv |
(orcid)0000-0003-3891-2171 |
title_sort |
introducing the prototypical stimulus characteristics toolbox: protosc |
title_auth |
Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc |
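The abstract above describes scoring image features by their predictive value over category membership, using machine-learning performance as the yardstick rather than maximal decoding accuracy. The Protosc toolbox itself is MATLAB-based; the following is only a rough Python sketch of the general idea on a mock data set with a ground truth. The data, the one-feature nearest-class-mean classifier, and the 0.6 above-chance criterion are all invented here for illustration and are not part of the toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock data set with a ground truth: 2 categories, 10 features,
# of which only the first 3 actually differ between categories.
n_per_class, n_feat = 100, 10
X = rng.normal(size=(2 * n_per_class, n_feat))
y = np.repeat([0, 1], n_per_class)
X[y == 1, :3] += 1.5  # shift the informative features for category 1

def cv_accuracy(x, y, n_folds=5):
    """Cross-validated accuracy of a one-feature nearest-class-mean classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        m0 = x[train][y[train] == 0].mean()
        m1 = x[train][y[train] == 1].mean()
        pred = (np.abs(x[fold] - m1) < np.abs(x[fold] - m0)).astype(int)
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# Score every feature by how well it alone predicts the category,
# then keep those clearly above chance (0.5 for two balanced classes).
scores = [cv_accuracy(X[:, f], y) for f in range(n_feat)]
selected = [f for f, s in enumerate(scores) if s > 0.6]
print(selected)
```

On this mock data the informative features score well above chance while the noise features hover near 0.5, so the selection recovers the planted ground truth — mirroring the sensitivity check described in the abstract, though the real toolbox evaluates many more feature types and handles the selection criterion for the user.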
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2014 |
container_issue |
5 |
title_short |
Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc |
url |
https://dx.doi.org/10.3758/s13428-021-01737-9 |
remote_bool |
true |
author2 |
Paffen, C. L. E. Van der Stigchel, S. |
author2Str |
Paffen, C. L. E. Van der Stigchel, S. |
ppnlink |
32998067X |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.3758/s13428-021-01737-9 |
up_date |
2024-07-03T18:54:22.857Z |
_version_ |
1803585179206287360 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR048391093</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230509114116.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">221019s2021 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3758/s13428-021-01737-9</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR048391093</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s13428-021-01737-9-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Stuit, S. 
M.</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0003-3891-2171</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Introducing the Prototypical Stimulus Characteristics Toolbox: Protosc</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s) 2021</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Many studies use different categories of images to define their conditions. Since any difference between these categories is a valid candidate to explain category-related behavioral differences, knowledge about the objective image differences between categories is crucial for the interpretation of the behaviors. However, natural images vary in many image features and not every feature is equally important in describing the differences between the categories. Here, we provide a methodological approach to find as many of the image features as possible, using machine learning performance as a tool, that have predictive value over the category the images belong to. In other words, we describe a means to find the features of a group of images by which the categories can be objectively and quantitatively defined. 
Note that we are not aiming to provide a means for the best possible decoding performance; instead, our aim is to uncover prototypical characteristics of the categories. To facilitate the use of this method, we offer an open-source, MATLAB-based toolbox that performs such an analysis and aids the user in visualizing the features of relevance. We first applied the toolbox to a mock data set with a ground truth to show the sensitivity of the approach. Next, we applied the toolbox to a set of natural images as a more practical example.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image Statistics</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Machine Learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Toolbox</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Matlab</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Paffen, C. L. E.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Van der Stigchel, S.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Behavior research methods, instruments & computers</subfield><subfield code="d">Austin, Tex. : Psychonomic Society Publ., 1984</subfield><subfield code="g">54(2021), 5 vom: 16. 
Dez., Seite 2422-2432</subfield><subfield code="w">(DE-627)32998067X</subfield><subfield code="w">(DE-600)2048669-8</subfield><subfield code="x">1532-5970</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:54</subfield><subfield code="g">year:2021</subfield><subfield code="g">number:5</subfield><subfield code="g">day:16</subfield><subfield code="g">month:12</subfield><subfield code="g">pages:2422-2432</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.3758/s13428-021-01737-9</subfield><subfield code="z">kostenfrei</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">54</subfield><subfield code="j">2021</subfield><subfield code="e">5</subfield><subfield code="b">16</subfield><subfield code="c">12</subfield><subfield code="h">2422-2432</subfield></datafield></record></collection>
|