Investigating non-visual eye movements non-intrusively: Comparing manual and automatic annotation styles

Please use this identifier to cite or link to this item:
https://doi.org/10.48693/256
Full metadata record
DC Field | Value | Language
dc.creator | Stüber, Jeremias | -
dc.creator | Junctorius, Lina | -
dc.creator | Hohenberger, Annette | -
dc.date.accessioned | 2023-02-17T12:58:53Z | -
dc.date.available | 2023-02-17T12:58:53Z | -
dc.date.issued | 2022-04-22 | -
dc.identifier.citation | Stüber, J., Junctorius, L., & Hohenberger, A. (2022): Investigating non-visual eye movements non-intrusively: Comparing manual and automatic annotation styles. Journal of Eye Movement Research, 15(2). | ger
dc.identifier.uri | https://doi.org/10.48693/256 | -
dc.identifier.uri | https://osnadocs.ub.uni-osnabrueck.de/handle/ds-202302178315 | -
dc.description.abstract | Non-visual eye movements (NVEMs) are eye movements that do not serve the provision of visual information. As yet, their cognitive origins and meaning remain under-explored in eye-movement research. The first problem presenting itself in pursuit of their study is one of annotation: by virtue of being non-visual, they are not necessarily bound to a specific surface or object of interest, rendering conventional eye-trackers non-ideal for their study. This, however, makes it potentially viable to investigate them without requiring high-resolution data. In this report, we present two approaches to annotating NVEM data: one grid-based, involving manual annotation in ELAN (Max Planck Institute for Psycholinguistics: The Language Archive, 2019), the other Cartesian coordinate-based, derived algorithmically through OpenFace (Baltrušaitis et al., 2018). We evaluated a) the two approaches in themselves, e.g. in terms of consistency, as well as b) their compatibility, i.e. the possibilities of mapping one onto the other. In the case of a), we found good overall consistency in both approaches; in the case of b), there is evidence for the eventual possibility of mapping the OpenFace gaze estimations onto the manual coding grid. | eng
dc.relation | https://doi.org/10.16910/jemr.15.2.1 | ger
dc.rights | Attribution 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | *
dc.subject | Eye movement | eng
dc.subject | non-visual | eng
dc.subject | gaze | eng
dc.subject | saccades | eng
dc.subject | annotation | eng
dc.subject | usability | eng
dc.subject.ddc | 150 - Psychology | ger
dc.subject.ddc | 004 - Computer science | ger
dc.title | Investigating non-visual eye movements non-intrusively: Comparing manual and automatic annotation styles | eng
dc.type | Single contribution in a scholarly journal [Article] | ger
dc.identifier.doi | 10.16910/jemr.15.2.1 | -
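
The abstract above mentions mapping OpenFace's coordinate-based gaze estimates onto the grid used for manual annotation in ELAN. As a rough, non-authoritative sketch of what such a mapping could look like, the Python snippet below bins the gaze_angle_x / gaze_angle_y columns produced by OpenFace's FeatureExtraction tool into a 3×3 grid of gaze directions. The bin edges, the grid labels, and the file name openface_output.csv are illustrative assumptions, not the coding scheme used in the paper.

```python
import csv

# Illustrative angular bin edges (radians). The real grid geometry would
# have to be calibrated against the manual ELAN coding scheme; these
# values are assumptions for demonstration only.
X_EDGES = [-0.15, 0.15]                  # left | centre | right
Y_EDGES = [-0.15, 0.15]                  # up   | centre | down (y grows downward)
X_LABELS = ["left", "centre", "right"]
Y_LABELS = ["up", "centre", "down"]

def bin_index(value, edges):
    """Return the index of the bin that `value` falls into."""
    for i, edge in enumerate(edges):
        if value < edge:
            return i
    return len(edges)

def grid_cell(gaze_x, gaze_y):
    """Map a pair of gaze angles to a discrete grid-cell label."""
    col = X_LABELS[bin_index(gaze_x, X_EDGES)]
    row = Y_LABELS[bin_index(gaze_y, Y_EDGES)]
    return "centre" if row == col == "centre" else f"{row}-{col}"

# "openface_output.csv" is a hypothetical name for the per-frame CSV
# written by OpenFace's FeatureExtraction tool, which includes the
# gaze_angle_x / gaze_angle_y columns (gaze direction in radians).
with open("openface_output.csv", newline="") as f:
    for frame in csv.DictReader(f, skipinitialspace=True):
        cell = grid_cell(float(frame["gaze_angle_x"]),
                         float(frame["gaze_angle_y"]))
        print(frame["frame"], cell)
```

Frame-wise labels of this kind could then be set against the manually annotated ELAN tiers, e.g. when assessing how well the automatic estimates map onto the manual coding grid.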
Appears in collections: FB08 - Hochschulschriften
Open-Access-Publikationsfonds

Files in this item:
File | Description | Size | Format
jemr_Stueber_etal_2022.pdf | Article | 19.33 MB | Adobe PDF


This item is licensed under a Creative Commons licence: Attribution 4.0 International (CC BY 4.0).