Convolutional Neural Networks and Ensembles for Visually Impaired Aid

Full metadata record
Author(s) (dc.contributor): Universidade Estadual Paulista (UNESP)
Author(s) (dc.creator): Breve, Fabricio
Accession date (dc.date.accessioned): 2025-08-21T20:31:15Z
Available date (dc.date.available): 2025-08-21T20:31:15Z
Issue date (dc.date.issued): 2025-04-29
Issue date (dc.date.issued): 2022-12-31
Full source (dc.identifier): http://dx.doi.org/10.1007/978-3-031-36805-9_34
Full source (dc.identifier): https://hdl.handle.net/11449/307675
Source (dc.identifier.uri): http://educapes.capes.gov.br/handle/11449/307675
Description (dc.description): Recent surveys show that smartphone-based computer vision tools for visually impaired individuals often rely on outdated computer vision algorithms. Deep-learning approaches have been explored, but many require high-end or specialized hardware that is not practical for users. Therefore, developing deep-learning systems that can make inferences using only the smartphone is desirable. This paper presents a comprehensive study of 25 different convolutional neural network (CNN) architectures to tackle the challenge of identifying obstacles in images captured by a smartphone positioned at chest height for visually impaired individuals. A transfer learning approach is employed, with the CNN models initialized with weights pre-trained on the vast ImageNet dataset. The study employs k-fold cross-validation with k = 10 and five repetitions to ensure the robustness of the results. Various configurations are explored for each CNN architecture, including different optimizers (Adam and RMSprop), freezing or fine-tuning convolutional layer weights, and different learning rates for convolutional and dense layers. Moreover, CNN ensembles are investigated, where multiple instances of the same or different CNN architectures are combined to enhance the overall performance. The highest accuracy achieved by an individual CNN is 94.56% using EfficientNetB4, surpassing the previous best result of 92.11%. With the use of ensembles, the accuracy is further improved to 96.55% using multiple instances of EfficientNetB4, EfficientNetB0, and MobileNet. Overall, the study contributes to the development of advanced deep-learning models that can enhance the mobility and independence of visually impaired individuals.
Description (dc.description): Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Description (dc.description): São Paulo State University, SP
Description (dc.description): São Paulo State University, SP
Description (dc.description): FAPESP: 2016/05669-4
Format (dc.format): 520-534
Language (dc.language): en
Relation (dc.relation): Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Source (dc.source): Scopus
Keywords (dc.subject): Computer Vision
Keywords (dc.subject): Convolutional Neural Networks
Keywords (dc.subject): Deep Learning
Keywords (dc.subject): Visually Impaired Aid
Title (dc.title): Convolutional Neural Networks and Ensembles for Visually Impaired Aid
File type (dc.type): digital lecture
Appears in collections: Repositório Institucional - Unesp

There are no files associated with this item.
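
The abstract describes a transfer-learning pipeline (ImageNet-pretrained CNN backbones with frozen or fine-tuned convolutional layers) and ensembles that average several trained networks, including EfficientNetB4, EfficientNetB0, and MobileNet. The following is a minimal illustrative sketch of that setup, not the authors' code: it assumes TensorFlow/Keras, and the class count, input size, and learning rate are hypothetical placeholders. It also omits the paper's RMSprop variants and its separate learning rates for convolutional and dense layers.

import numpy as np
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 5              # hypothetical: the record does not state the class count
IMAGE_SHAPE = (224, 224, 3)  # placeholder input size

def build_transfer_model(backbone_fn, freeze=True, lr=1e-3):
    """Transfer learning: ImageNet-pretrained backbone plus a dense classifier head."""
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=IMAGE_SHAPE, pooling="avg")
    base.trainable = not freeze            # freeze or fine-tune the convolutional weights
    inputs = keras.Input(shape=IMAGE_SHAPE)
    x = base(inputs, training=False)       # keep BatchNorm statistics fixed
    outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Instances of the architectures named in the abstract.
models = [
    build_transfer_model(keras.applications.EfficientNetB4),
    build_transfer_model(keras.applications.EfficientNetB0),
    build_transfer_model(keras.applications.MobileNet),
]
# ... each model would be trained on the obstacle dataset here ...

def ensemble_predict(models, images):
    """Simple ensemble: average the softmax outputs of all models, then take the argmax."""
    probs = np.mean([m.predict(images, verbose=0) for m in models], axis=0)
    return probs.argmax(axis=1)

Averaging softmax outputs is one common way to combine CNN instances; the paper may combine its ensemble members differently.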