Biologically Inspired Deep Learning Model for Efficient Foveal-Peripheral Vision

Please use this identifier to cite or link to this item:
https://doi.org/10.48693/63
Full metadata record
DC Field: Value [Language]
dc.creator: Lukanov, Hristofor
dc.creator: König, Peter
dc.creator: Pipa, Gordon
dc.date.accessioned: 2022-02-10T16:16:40Z
dc.date.available: 2022-02-10T16:16:40Z
dc.date.issued: 2021-11-22
dc.identifier.citation: Lukanov H, König P and Pipa G (2021): Biologically Inspired Deep Learning Model for Efficient Foveal-Peripheral Vision. Front. Comput. Neurosci. 15:746204. [ger]
dc.identifier.uri: https://doi.org/10.48693/63
dc.identifier.uri: https://osnadocs.ub.uni-osnabrueck.de/handle/ds-202202106316
dc.description.abstract: While abundant in biology, foveated vision is nearly absent from computational models and especially deep learning architectures. Despite considerable hardware improvements, training deep neural networks still presents a challenge and constrains the complexity of models. Here we propose an end-to-end neural model for foveal-peripheral vision, inspired by retino-cortical mapping in primates and humans. Our model has an efficient sampling technique for compressing the visual signal such that a small portion of the scene is perceived in high resolution while a large field of view is maintained in low resolution. An attention mechanism for performing “eye movements” assists the agent in collecting detailed information incrementally from the observed scene. Our model achieves results comparable to those of a similar neural architecture trained on full-resolution data for image classification and outperforms it at video classification tasks. At the same time, because of the smaller size of its input, it can reduce computational effort tenfold and uses several times less memory. Moreover, we present an easy-to-implement bottom-up and top-down attention mechanism which relies on task-relevant features and is therefore a convenient byproduct of the main architecture. Apart from its computational efficiency, the presented work provides a means for exploring active vision for agent training in simulated environments and anthropomorphic robotics. [eng]
dc.relation: https://doi.org/10.3389/fncom.2021.746204 [ger]
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: space-variant vision [eng]
dc.subject: active vision [eng]
dc.subject: foveal vision [eng]
dc.subject: peripheral vision [eng]
dc.subject: deep learning-artificial neural network (DL-ANN) [eng]
dc.subject: bottom-up attention [eng]
dc.subject: top-down attention [eng]
dc.subject.ddc: 004 - Computer science [ger]
dc.title: Biologically Inspired Deep Learning Model for Efficient Foveal-Peripheral Vision [eng]
dc.type: Individual contribution in a scholarly journal [article] [ger]
orcid.creator: https://orcid.org/0000-0003-3654-5267
orcid.creator: https://orcid.org/0000-0002-3416-2652
dc.identifier.doi: 10.3389/fncom.2021.746204
Appears in collections: FB08 - Hochschulschriften; Open-Access-Publikationsfonds
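
The abstract above describes a space-variant input scheme in which a small foveal region is kept at full resolution while the periphery covers a wide field of view at low resolution. As a rough illustration only, such a foveal-peripheral split can be approximated in a few lines of NumPy. This is not the authors' retino-cortical mapping: the function name foveate, the 32-pixel patch sizes, and the block-averaged periphery are assumptions made for this sketch.

```python
# Minimal, hypothetical sketch of foveal-peripheral sampling (not the
# published model): a small high-resolution crop around a fixation point
# serves as the "fovea", and a coarsely downsampled copy of the whole
# frame serves as the "periphery". All names and sizes are illustrative.
import numpy as np

def foveate(image, fixation, fovea_size=32, periphery_size=32):
    """Return (fovea, periphery) views of an H x W x C image.

    fovea:     full-resolution crop of shape (fovea_size, fovea_size, C)
               centred on the fixation point (clipped at the borders).
    periphery: the whole frame block-averaged down to
               (periphery_size, periphery_size, C).
    """
    h, w, c = image.shape
    y, x = fixation
    half = fovea_size // 2

    # High-resolution foveal crop, keeping the window inside the image.
    y0 = int(np.clip(y - half, 0, h - fovea_size))
    x0 = int(np.clip(x - half, 0, w - fovea_size))
    fovea = image[y0:y0 + fovea_size, x0:x0 + fovea_size]

    # Low-resolution peripheral view: crop to a multiple of the target
    # size, then average non-overlapping blocks (a simple stand-in for
    # any space-variant downsampling scheme).
    bh, bw = h // periphery_size, w // periphery_size
    cropped = image[:bh * periphery_size, :bw * periphery_size]
    periphery = cropped.reshape(periphery_size, bh,
                                periphery_size, bw, c).mean(axis=(1, 3))
    return fovea, periphery

# Example: a 256x256 RGB frame compressed into two 32x32 patches.
frame = np.random.rand(256, 256, 3).astype(np.float32)
fovea, periphery = foveate(frame, fixation=(120, 180))
print(fovea.shape, periphery.shape)  # (32, 32, 3) (32, 32, 3)
```

In this toy setting the two 32x32 patches carry roughly 30 times fewer pixels than the 256x256 frame, which conveys the kind of input compression the abstract refers to, though the paper's actual sampling and attention mechanisms differ.
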

Files in this item:
File | Description | Size | Format
fncom_Lukanov_etal_2021.pdf | Article | 2.42 MB | Adobe PDF


This item is published under the following Creative Commons license: Attribution 4.0 International (CC BY 4.0).