REMoDNaV: robust eye-movement classification for dynamic stimulation
Abstract Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.
Detailed description

Author: Dar, Asim H. [author]; Wagner, Adina S. [author]; Hanke, Michael [author]
Format: E-article
Language: English
Published: 2020
Subjects: Adaptive classification algorithm
Parent work: Contained in: Behavior research methods, instruments & computers - Austin, Tex.: Psychonomic Society Publ., 1984, 53(2020), 1, 24 July, pages 399-414
Parent work: volume:53 ; year:2020 ; number:1 ; day:24 ; month:07 ; pages:399-414
Links:
DOI / URN: 10.3758/s13428-020-01428-x
Catalog ID: SPR043150128
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | SPR043150128 | ||
003 | DE-627 | ||
005 | 20220111192940.0 | ||
007 | cr uuu---uuuuu | ||
008 | 210215s2020 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.3758/s13428-020-01428-x |2 doi | |
035 | |a (DE-627)SPR043150128 | ||
035 | |a (DE-599)SPRs13428-020-01428-x-e | ||
035 | |a (SPR)s13428-020-01428-x-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | 4 | |a 150 |q ASE |
084 | |a 77.00 |2 bkl | ||
100 | 1 | |a Dar, Asim H. |e verfasserin |4 aut | |
245 | 1 | 0 | |a REMoDNaV: robust eye-movement classification for dynamic stimulation |
264 | 1 | |c 2020 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Abstract Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources. | ||
650 | 4 | |a Eye tracking |7 (dpeaa)DE-He213 | |
650 | 4 | |a Adaptive classification algorithm |7 (dpeaa)DE-He213 | |
650 | 4 | |a Saccade classification algorithm |7 (dpeaa)DE-He213 | |
650 | 4 | |a Statistical saccade analysis |7 (dpeaa)DE-He213 | |
650 | 4 | |a Glissade classification |7 (dpeaa)DE-He213 | |
650 | 4 | |a Adaptive threshold algorithm |7 (dpeaa)DE-He213 | |
650 | 4 | |a Data preprocessing |7 (dpeaa)DE-He213 | |
700 | 1 | |a Wagner, Adina S. |e verfasserin |4 aut | |
700 | 1 | |a Hanke, Michael |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Behavior research methods, instruments & computers |d Austin, Tex. : Psychonomic Society Publ., 1984 |g 53(2020), 1 vom: 24. Juli, Seite 399-414 |w (DE-627)32998067X |w (DE-600)2048669-8 |x 1532-5970 |7 nnns |
773 | 1 | 8 | |g volume:53 |g year:2020 |g number:1 |g day:24 |g month:07 |g pages:399-414 |
856 | 4 | 0 | |u https://dx.doi.org/10.3758/s13428-020-01428-x |z kostenfrei |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2014 | ||
936 | b | k | |a 77.00 |q ASE |
951 | |a AR | ||
952 | |d 53 |j 2020 |e 1 |b 24 |c 07 |h 399-414 |
author |
Dar, Asim H. |
spellingShingle |
Dar, Asim H. ddc 150 bkl 77.00 misc Eye tracking misc Adaptive classification algorithm misc Saccade classification algorithm misc Statistical saccade analysis misc Glissade classification misc Adaptive threshold algorithm misc Data preprocessing REMoDNaV: robust eye-movement classification for dynamic stimulation |
authorStr |
Dar, Asim H. |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)32998067X |
format |
electronic Article |
dewey-ones |
150 - Psychology |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1532-5970 |
topic_title |
150 ASE 77.00 bkl REMoDNaV: robust eye-movement classification for dynamic stimulation Eye tracking (dpeaa)DE-He213 Adaptive classification algorithm (dpeaa)DE-He213 Saccade classification algorithm (dpeaa)DE-He213 Statistical saccade analysis (dpeaa)DE-He213 Glissade classification (dpeaa)DE-He213 Adaptive threshold algorithm (dpeaa)DE-He213 Data preprocessing (dpeaa)DE-He213 |
topic |
ddc 150 bkl 77.00 misc Eye tracking misc Adaptive classification algorithm misc Saccade classification algorithm misc Statistical saccade analysis misc Glissade classification misc Adaptive threshold algorithm misc Data preprocessing |
topic_unstemmed |
ddc 150 bkl 77.00 misc Eye tracking misc Adaptive classification algorithm misc Saccade classification algorithm misc Statistical saccade analysis misc Glissade classification misc Adaptive threshold algorithm misc Data preprocessing |
topic_browse |
ddc 150 bkl 77.00 misc Eye tracking misc Adaptive classification algorithm misc Saccade classification algorithm misc Statistical saccade analysis misc Glissade classification misc Adaptive threshold algorithm misc Data preprocessing |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Behavior research methods, instruments & computers |
hierarchy_parent_id |
32998067X |
dewey-tens |
150 - Psychology |
hierarchy_top_title |
Behavior research methods, instruments & computers |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)32998067X (DE-600)2048669-8 |
title |
REMoDNaV: robust eye-movement classification for dynamic stimulation |
ctrlnum |
(DE-627)SPR043150128 (DE-599)SPRs13428-020-01428-x-e (SPR)s13428-020-01428-x-e |
title_full |
REMoDNaV: robust eye-movement classification for dynamic stimulation |
author_sort |
Dar, Asim H. |
journal |
Behavior research methods, instruments & computers |
journalStr |
Behavior research methods, instruments & computers |
lang_code |
eng |
isOA_bool |
true |
dewey-hundreds |
100 - Philosophy & psychology |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
txt |
container_start_page |
399 |
author_browse |
Dar, Asim H. Wagner, Adina S. Hanke, Michael |
container_volume |
53 |
class |
150 ASE 77.00 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Dar, Asim H. |
doi_str_mv |
10.3758/s13428-020-01428-x |
dewey-full |
150 |
author2-role |
verfasserin |
title_sort |
remodnav: robust eye-movement classification for dynamic stimulation |
title_auth |
REMoDNaV: robust eye-movement classification for dynamic stimulation |
abstract |
Abstract Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources. |
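The abstract describes a velocity-based approach: gaze samples are converted to angular velocity, and events such as saccades are classified against velocity thresholds. The following is a minimal illustrative sketch of that general idea, not REMoDNaV's actual implementation (which, per the keywords above, uses adaptive thresholds, preprocessing, and additional event types such as post-saccadic oscillations and smooth pursuit). The function name, the fixed 300 deg/s threshold, and the toy labels are assumptions made here for illustration only.

```python
import math

def classify_velocity(x, y, px2deg, hz, sacc_thresh=300.0):
    """Toy velocity-based sample classifier (illustration only).

    x, y       : gaze coordinates in pixels (equal-length sequences)
    px2deg     : degrees of visual angle per pixel
    hz         : sampling rate in Hz
    sacc_thresh: fixed velocity threshold in deg/s; REMoDNaV itself
                 derives thresholds adaptively from the data.

    Returns one label ("SACC" or "FIX") per inter-sample interval.
    """
    labels = []
    for i in range(1, len(x)):
        # Euclidean sample-to-sample displacement in pixels
        dist_px = math.hypot(x[i] - x[i - 1], y[i] - y[i - 1])
        # Convert to angular velocity: pixels -> degrees, interval -> seconds
        vel = dist_px * px2deg * hz  # deg/s
        labels.append("SACC" if vel > sacc_thresh else "FIX")
    return labels
```

The published tool itself is distributed as the open-source Python package `remodnav`; consult its documentation for the real interface and parameters.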
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_152 GBV_ILN_161 GBV_ILN_171 GBV_ILN_187 GBV_ILN_224 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2014 |
container_issue |
1 |
title_short |
REMoDNaV: robust eye-movement classification for dynamic stimulation |
url |
https://dx.doi.org/10.3758/s13428-020-01428-x |
remote_bool |
true |
author2 |
Wagner, Adina S. Hanke, Michael |
author2Str |
Wagner, Adina S. Hanke, Michael |
ppnlink |
32998067X |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.3758/s13428-020-01428-x |
up_date |
2024-07-03T16:54:11.655Z |
_version_ |
1803577617709793280 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR043150128</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20220111192940.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">210215s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3758/s13428-020-01428-x</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR043150128</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)SPRs13428-020-01428-x-e</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s13428-020-01428-x-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">150</subfield><subfield code="q">ASE</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">77.00</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Dar, Asim H.</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">REMoDNaV: robust eye-movement classification for dynamic stimulation</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield 
code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. 
The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Eye tracking</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Adaptive classification algorithm</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Saccade classification algorithm</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Statistical saccade analysis</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Glissade classification</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Adaptive threshold algorithm</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Data preprocessing</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wagner, Adina S.</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hanke, Michael</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Behavior research methods, instruments & computers</subfield><subfield code="d">Austin, Tex. : Psychonomic Society Publ., 1984</subfield><subfield code="g">53(2020), 1 vom: 24. 
Juli, Seite 399-414</subfield><subfield code="w">(DE-627)32998067X</subfield><subfield code="w">(DE-600)2048669-8</subfield><subfield code="x">1532-5970</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:53</subfield><subfield code="g">year:2020</subfield><subfield code="g">number:1</subfield><subfield code="g">day:24</subfield><subfield code="g">month:07</subfield><subfield code="g">pages:399-414</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.3758/s13428-020-01428-x</subfield><subfield code="z">kostenfrei</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">77.00</subfield><subfield code="q">ASE</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">53</subfield><subfield code="j">2020</subfield><subfield code="e">1</subfield><subfield code="b">24</subfield><subfield code="c">07</subfield><subfield code="h">399-414</subfield></datafield></record></collection>
|