Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks
Detailed description
Author(s): Faltyn, Mateusz [author]; Krzeczkowski, John E. [author]; Cummings, Mike [author]; Anwar, Samia [author]; Zeng, Tammy [author]; Zahid, Isra [author]; Ntow, Kwadjo Otu-Boateng [author]; Van Lieshout, Ryan J. [author]
Format: Electronic article
Language: English
Published: 2023
Subjects: Machine learning; Deep neural networks; Face-to-Face Still-Face Task; Developmental psychology; Perinatal psychiatry
Parent work: Contained in: Infant behavior and development - Amsterdam [u.a.] : Elsevier Science, 1978, 71
Parent work: volume:71
DOI / URN: 10.1016/j.infbeh.2023.101827
Catalog ID: ELV010231390
LEADER | 01000naa a22002652 4500 | ||
001 | ELV010231390 | ||
003 | DE-627 | ||
005 | 20230609195112.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230609s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.infbeh.2023.101827 |2 doi | |
035 | |a (DE-627)ELV010231390 | ||
035 | |a (ELSEVIER)S0163-6383(23)00019-X | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 150 |q VZ |
084 | |a 77.00 |2 bkl | ||
100 | 1 | |a Faltyn, Mateusz |e verfasserin |0 (orcid)0000-0002-7276-3029 |4 aut | |
245 | 1 | 0 | |a Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks |
264 | 1 | |c 2023 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Background: The Face-to-Face Still-Face (FFSF) task is a validated and commonly used observational measure of mother-infant socio-emotional interactions. With the ascendance of deep learning-based facial emotion recognition, it is possible that common but complex tasks, such as the coding of FFSF videos, could be completed with a high degree of accuracy by deep neural networks (DNNs). The primary objective of this study was to test the accuracy of four DNN image classification models against the coding of infant engagement conducted by two trained independent manual raters. Methods: 68 mother-infant dyads completed the FFSF task at three timepoints. Two trained independent raters undertook second-by-second manual coding of infant engagement into one of four classes: 1) positive affect, 2) neutral affect, 3) object/environment engagement, and 4) negative affect. Results: Training four different DNN models on 40,000 images, we achieved a maximum accuracy of 99.5% on image classification of infant frames taken from recordings of the FFSF task, with a maximum inter-rater reliability (Cohen's κ) of 0.993. Limitations: This study inherits all sampling and experimental limitations of the original study from which the data were taken, namely a relatively small and primarily White sample. Conclusions: Given the extremely high classification accuracy, these findings suggest that DNNs could be used to code infant engagement in FFSF recordings. DNN image classification models may also have the potential to improve the efficiency of coding observational tasks more broadly, with applications across multiple fields of human behavior research. | ||
650 | 4 | |a Machine learning | |
650 | 4 | |a Deep neural networks | |
650 | 4 | |a Face-to-Face Still-Face Task | |
650 | 4 | |a Developmental psychology | |
650 | 4 | |a Perinatal psychiatry | |
700 | 1 | |a Krzeczkowski, John E. |e verfasserin |4 aut | |
700 | 1 | |a Cummings, Mike |e verfasserin |4 aut | |
700 | 1 | |a Anwar, Samia |e verfasserin |4 aut | |
700 | 1 | |a Zeng, Tammy |e verfasserin |4 aut | |
700 | 1 | |a Zahid, Isra |e verfasserin |4 aut | |
700 | 1 | |a Ntow, Kwadjo Otu-Boateng |e verfasserin |4 aut | |
700 | 1 | |a Van Lieshout, Ryan J. |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Infant behavior and development |d Amsterdam [u.a.] : Elsevier Science, 1978 |g 71 |h Online-Ressource |w (DE-627)320465349 |w (DE-600)2007808-0 |w (DE-576)098614940 |x 1934-8800 |7 nnns |
773 | 1 | 8 | |g volume:71 |
912 | |a GBV_USEFLAG_U | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
936 | b | k | |a 77.00 |j Psychologie: Allgemeines |q VZ |
951 | |a AR | ||
952 | |d 71 |
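
The abstract in field 520 above describes four-class image classification of infant video frames and reports agreement as Cohen's κ, but the record does not name the DNN architectures or tooling the authors used. The following is only a minimal illustrative sketch, assuming PyTorch/torchvision for a generic four-class transfer-learning classifier and scikit-learn for the κ computation; the class names, the ResNet-18 backbone, and the helper `classify_frame` are hypothetical stand-ins, not taken from the paper.

```python
# Illustrative sketch only: the record does not specify the study's DNN
# architectures or training pipeline. This shows a generic four-class
# transfer-learning classifier and a Cohen's kappa computation.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.metrics import cohen_kappa_score

# The four engagement classes named in the abstract (field 520).
CLASSES = ["positive_affect", "neutral_affect", "object_engagement", "negative_affect"]

# Hypothetical backbone: ImageNet-pretrained ResNet-18 with a 4-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

# Standard ImageNet-style preprocessing for individual video frames.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_frame(frame_pil):
    """Predict the engagement class for a single PIL image of an infant frame."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(frame_pil).unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]

# Agreement between two second-by-second label sequences (rater vs. rater,
# or model vs. rater), as summarized by Cohen's kappa in the abstract.
rater_a = ["neutral_affect", "positive_affect", "negative_affect", "neutral_affect"]
rater_b = ["neutral_affect", "positive_affect", "neutral_affect", "neutral_affect"]
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))
```

Under this sketch the classifier head would still need fine-tuning on labeled frames; the 99.5% accuracy and κ = 0.993 reported in the abstract refer to the authors' own trained models, not to this example.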
author_variant |
m f mf j e k je jek m c mc s a sa t z tz i z iz k o b n kob kobn l r j v lrj lrjv |
matchkey_str |
article:19348800:2023----::oignatnaeetnhfctfcsilaeaaims |
hierarchy_sort_str |
2023 |
bklnumber |
77.00 |
publishDate |
2023 |
language |
English |
source |
Enthalten in Infant behavior and development 71 volume:71 |
sourceStr |
Enthalten in Infant behavior and development 71 volume:71 |
format_phy_str_mv |
Article |
bklname |
Psychologie: Allgemeines |
institution |
findex.gbv.de |
topic_facet |
Machine learning Deep neural networks Face-to-Face Still-Face Task Developmental psychology Perinatal psychiatry |
dewey-raw |
150 |
isfreeaccess_bool |
false |
container_title |
Infant behavior and development |
authorswithroles_txt_mv |
Faltyn, Mateusz @@aut@@ Krzeczkowski, John E. @@aut@@ Cummings, Mike @@aut@@ Anwar, Samia @@aut@@ Zeng, Tammy @@aut@@ Zahid, Isra @@aut@@ Ntow, Kwadjo Otu-Boateng @@aut@@ Van Lieshout, Ryan J. @@aut@@ |
publishDateDaySort_date |
2023-01-01T00:00:00Z |
hierarchy_top_id |
320465349 |
dewey-sort |
3150 |
id |
ELV010231390 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">ELV010231390</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230609195112.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230609s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.infbeh.2023.101827</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV010231390</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0163-6383(23)00019-X</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">150</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">77.00</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Faltyn, Mateusz</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-7276-3029</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Background: The Face-to-Face Still-Face (FFSF) task is a validated and commonly used observational measure of mother-infant socio-emotional interactions. With the ascendence of deep learning-based facial emotion recognition, it is possible that common complex tasks, such as the coding of FFSF videos, could be coded with a high degree of accuracy by deep neural networks (DNNs). The primary objective of this study was to test the accuracy of four DNN image classification models against the coding of infant engagement conducted by two trained independent manual raters.Methods: 68 mother-infant dyads completed the FFSF task at three timepoints. 
Two trained independent raters undertook second-by-second manual coding of infant engagement into one of four classes: 1) positive affect, 2) neutral affect, 3) object/environment engagement, and 4) negative affect.Results: Training four different DNN models on 40,000 images, we achieved a maximum accuracy of 99.5% on image classification of infant frames taken from recordings of the FFSF task with a maximum inter-rater reliability (Cohen's κ-value) of 0.993.Limitations: This study inherits all sampling and experimental limitations of the original study from which the data was taken, namely a relatively small and primarily White sample.Conclusions: Based on the extremely high classification accuracy, these findings suggest that DNNs could be used to code infant engagement in FFSF recordings. DNN image classification models may also have the potential to improve the efficiency of coding all observational tasks with applications across multiple fields of human behavior research.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Machine learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep neural networks</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Face-to-Face Still-Face Task</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Developmental psychology</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Perinatal psychiatry</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Krzeczkowski, John E.</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Cummings, Mike</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Anwar, Samia</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zeng, Tammy</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zahid, Isra</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Ntow, Kwadjo Otu-Boateng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Van Lieshout, Ryan J.</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Infant behavior and development</subfield><subfield code="d">Amsterdam [u.a.] 
: Elsevier Science, 1978</subfield><subfield code="g">71</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)320465349</subfield><subfield code="w">(DE-600)2007808-0</subfield><subfield code="w">(DE-576)098614940</subfield><subfield code="x">1934-8800</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:71</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">77.00</subfield><subfield code="j">Psychologie: Allgemeines</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">71</subfield></datafield></record></collection>
|
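
The `fullrecord` field above stores the record as MARCXML (MARC21 slim schema). As a small illustration of how such a serialization could be consumed downstream, the sketch below parses a trimmed-down record with Python's standard-library `xml.etree.ElementTree` and pulls out the DOI (024 |a), title (245 |a), and author names (100/700 |a); the function name `parse_marcxml` and the inline sample are illustrative only, not part of the catalog system.

```python
# Sketch: extract a few fields from a MARCXML record like the one stored in
# the "fullrecord" field above, using only the Python standard library.
import xml.etree.ElementTree as ET

NS = {"m": "http://www.loc.gov/MARC21/slim"}

def parse_marcxml(xml_text: str) -> dict:
    """Return DOI, title, and author names from a MARC21-slim <collection>."""
    # Encode to bytes so the XML declaration's encoding attribute is honored.
    record = ET.fromstring(xml_text.encode("utf-8")).find("m:record", NS)
    out = {"doi": None, "title": None, "authors": []}
    for field in record.findall("m:datafield", NS):
        tag = field.get("tag")
        subs = {sf.get("code"): sf.text for sf in field.findall("m:subfield", NS)}
        if tag == "024":
            out["doi"] = subs.get("a")
        elif tag == "245":
            out["title"] = subs.get("a")
        elif tag in ("100", "700"):
            out["authors"].append(subs.get("a"))
    return out

# Trimmed-down sample built from the record above (not the full stored record).
sample = """<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim"><record>
<datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.infbeh.2023.101827</subfield><subfield code="2">doi</subfield></datafield>
<datafield tag="100" ind1="1" ind2=" "><subfield code="a">Faltyn, Mateusz</subfield></datafield>
<datafield tag="245" ind1="1" ind2="0"><subfield code="a">Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks</subfield></datafield>
</record></collection>"""

print(parse_marcxml(sample))
```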
author |
Faltyn, Mateusz |
spellingShingle |
Faltyn, Mateusz ddc 150 bkl 77.00 misc Machine learning misc Deep neural networks misc Face-to-Face Still-Face Task misc Developmental psychology misc Perinatal psychiatry Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks |
authorStr |
Faltyn, Mateusz |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)320465349 |
format |
electronic Article |
dewey-ones |
150 - Psychology |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1934-8800 |
topic_title |
150 VZ 77.00 bkl Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks Machine learning Deep neural networks Face-to-Face Still-Face Task Developmental psychology Perinatal psychiatry |
topic |
ddc 150 bkl 77.00 misc Machine learning misc Deep neural networks misc Face-to-Face Still-Face Task misc Developmental psychology misc Perinatal psychiatry |
topic_unstemmed |
ddc 150 bkl 77.00 misc Machine learning misc Deep neural networks misc Face-to-Face Still-Face Task misc Developmental psychology misc Perinatal psychiatry |
topic_browse |
ddc 150 bkl 77.00 misc Machine learning misc Deep neural networks misc Face-to-Face Still-Face Task misc Developmental psychology misc Perinatal psychiatry |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Infant behavior and development |
hierarchy_parent_id |
320465349 |
dewey-tens |
150 - Psychology |
hierarchy_top_title |
Infant behavior and development |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)320465349 (DE-600)2007808-0 (DE-576)098614940 |
title |
Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks |
ctrlnum |
(DE-627)ELV010231390 (ELSEVIER)S0163-6383(23)00019-X |
title_full |
Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks |
author_sort |
Faltyn, Mateusz |
journal |
Infant behavior and development |
journalStr |
Infant behavior and development |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
100 - Philosophy & psychology |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
author_browse |
Faltyn, Mateusz Krzeczkowski, John E. Cummings, Mike Anwar, Samia Zeng, Tammy Zahid, Isra Ntow, Kwadjo Otu-Boateng Van Lieshout, Ryan J. |
container_volume |
71 |
class |
150 VZ 77.00 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Faltyn, Mateusz |
doi_str_mv |
10.1016/j.infbeh.2023.101827 |
normlink |
(ORCID)0000-0002-7276-3029 |
normlink_prefix_str_mv |
(orcid)0000-0002-7276-3029 |
dewey-full |
150 |
author2-role |
verfasserin |
title_sort |
coding infant engagement in the face-to-face still-face paradigm using deep neural networks |
title_auth |
Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks |
abstract |
Background: The Face-to-Face Still-Face (FFSF) task is a validated and commonly used observational measure of mother-infant socio-emotional interactions. With the ascendance of deep learning-based facial emotion recognition, it is possible that common but complex tasks, such as the coding of FFSF videos, could be completed with a high degree of accuracy by deep neural networks (DNNs). The primary objective of this study was to test the accuracy of four DNN image classification models against the coding of infant engagement conducted by two trained independent manual raters. Methods: 68 mother-infant dyads completed the FFSF task at three timepoints. Two trained independent raters undertook second-by-second manual coding of infant engagement into one of four classes: 1) positive affect, 2) neutral affect, 3) object/environment engagement, and 4) negative affect. Results: Training four different DNN models on 40,000 images, we achieved a maximum accuracy of 99.5% on image classification of infant frames taken from recordings of the FFSF task, with a maximum inter-rater reliability (Cohen's κ) of 0.993. Limitations: This study inherits all sampling and experimental limitations of the original study from which the data were taken, namely a relatively small and primarily White sample. Conclusions: Given the extremely high classification accuracy, these findings suggest that DNNs could be used to code infant engagement in FFSF recordings. DNN image classification models may also have the potential to improve the efficiency of coding observational tasks more broadly, with applications across multiple fields of human behavior research. |
title_short |
Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks |
remote_bool |
true |
author2 |
Krzeczkowski, John E. Cummings, Mike Anwar, Samia Zeng, Tammy Zahid, Isra Ntow, Kwadjo Otu-Boateng Van Lieshout, Ryan J. |
author2Str |
Krzeczkowski, John E. Cummings, Mike Anwar, Samia Zeng, Tammy Zahid, Isra Ntow, Kwadjo Otu-Boateng Van Lieshout, Ryan J. |
ppnlink |
320465349 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.infbeh.2023.101827 |
up_date |
2024-07-06T17:16:43.889Z |
_version_ |
1803850826533306368 |
ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">77.00</subfield><subfield code="j">Psychologie: Allgemeines</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">71</subfield></datafield></record></collection>