Metadata | Description | Language |
---|---|---|
Author(s): dc.contributor | Universidade Estadual Paulista (Unesp) | - |
Author(s): dc.creator | Breve, Fabricio [UNESP] | - |
Author(s): dc.creator | Fischer, Carlos N. [UNESP] | - |
Author(s): dc.creator | IEEE | - |
Acceptance date: dc.date.accessioned | 2022-02-22T00:55:49Z | - |
Availability date: dc.date.available | 2022-02-22T00:55:49Z | - |
Submission date: dc.date.issued | 2021-06-25 | - |
Submission date: dc.date.issued | 2019-12-31 | - |
Full source of the material: dc.identifier | http://hdl.handle.net/11449/209251 | - |
Source: dc.identifier.uri | http://educapes.capes.gov.br/handle/11449/209251 | - |
Description: dc.description | Navigation and mobility are some of the major problems faced by visually impaired people in their daily lives. Advances in computer vision led to the proposal of some navigation systems. However, most of them require expensive and/or heavy hardware. In this paper we propose the use of convolutional neural networks (CNN), transfer learning, and semi-supervised learning (SSL) to build a framework aimed at the visually impaired aid. It has low computational costs and, therefore, may be implemented on current smartphones, without relying on any additional equipment. The smartphone camera can be used to automatically take pictures of the path ahead. Then, they will be immediately classified, providing almost instantaneous feedback to the user. We also propose a dataset to train the classifiers, including indoor and outdoor situations with different types of light, floor, and obstacles. Many different CNN architectures are evaluated as feature extractors and classifiers, by fine-tuning weights pre-trained on a much larger dataset. The graph-based SSL method, known as particle competition and cooperation, is also used for classification, allowing feedback from the user to be incorporated without retraining the underlying network. 92% and 80% classification accuracy is achieved in the proposed dataset in the best supervised and SSL scenarios, respectively. | - |
Description: dc.description | Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) | - |
Description: dc.description | Sao Paulo State Univ UNESP, Inst Geosci & Exact Sci, Rio Claro, SP, Brazil | - |
Description: dc.description | FAPESP: 2016/05669-4 | - |
Format: dc.format | 8 | - |
Language: dc.language | en | - |
Publisher: dc.publisher | IEEE | - |
Relation: dc.relation | 2020 International Joint Conference on Neural Networks (IJCNN) | - |
Source: dc.source | Web of Science | - |
Keywords: dc.subject | Transfer Learning | - |
Keywords: dc.subject | Particle Competition and Cooperation | - |
Keywords: dc.subject | Convolutional Neural Networks | - |
Keywords: dc.subject | Semi-Supervised Learning | - |
Title: dc.title | Visually Impaired Aid using Convolutional Neural Networks, Transfer Learning, and Particle Competition and Cooperation | - |
Appears in collections: | Repositório Institucional - Unesp |
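
The abstract above describes fine-tuning CNNs pre-trained on a much larger dataset so they can classify smartphone pictures of the path ahead. The sketch below is a minimal, non-authoritative illustration of that transfer-learning step, not the authors' code: it assumes PyTorch and torchvision, a hypothetical dataset laid out in class subfolders under `dataset/train` and `dataset/val`, and an arbitrary ResNet-18 backbone, whereas the paper evaluates many architectures.

```python
# Hedged sketch of transfer learning for the navigation-image classifier
# described in the abstract. All paths, the backbone choice, and the
# hyperparameters are assumptions made for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing, matching the pre-trained backbone weights
# (ImageNet plays the role of the "much larger dataset" in the abstract).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("dataset/train", transform=preprocess)
val_set = datasets.ImageFolder("dataset/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Load a pre-trained backbone and replace its classification head with one
# sized for the navigation classes (e.g. clear path vs. obstacle).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for epoch in range(5):  # a few fine-tuning epochs, purely illustrative
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Simple held-out accuracy, analogous in spirit to the accuracies
    # reported in the abstract (not a reproduction of those numbers).
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch}: val accuracy = {correct / total:.3f}")
```

In the semi-supervised scenario described in the abstract, the CNN would instead act as a feature extractor: its penultimate-layer features for labeled and unlabeled images would form a graph on which the particle competition and cooperation method propagates labels, incorporating user feedback without retraining the network. That graph-based step has no standard library implementation and is not sketched here.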