Metadata | Description | Language |
---|---|---|
Author(s): dc.contributor | Universidade Estadual Paulista (Unesp) | - |
Author(s): dc.creator | Roder, Mateus [UNESP] | - |
Author(s): dc.creator | Passos, Leandro A. [UNESP] | - |
Author(s): dc.creator | Ribeiro, Luiz Carlos Felix [UNESP] | - |
Author(s): dc.creator | Pereira, Clayton [UNESP] | - |
Author(s): dc.creator | Papa, João Paulo [UNESP] | - |
Date accessioned: dc.date.accessioned | 2022-02-22T00:52:37Z | - |
Date available: dc.date.available | 2022-02-22T00:52:37Z | - |
Issue date: dc.date.issued | 2021-06-25 | - |
Issue date: dc.date.issued | 2019-12-31 | - |
Full source: dc.identifier | http://dx.doi.org/10.1007/978-3-030-61401-0_22 | - |
Full source: dc.identifier | http://hdl.handle.net/11449/208175 | - |
Source: dc.identifier.uri | http://educapes.capes.gov.br/handle/11449/208175 | - |
Description: dc.description | With the advent of deep learning, the number of works proposing new methods or improving existing ones has grown exponentially in recent years. In this scenario, "very deep" models emerged, as they were expected to extract more intrinsic and abstract features while delivering better performance. However, such models suffer from the vanishing gradient problem, i.e., backpropagated values become too close to zero in their shallower layers, ultimately causing learning to stagnate. This issue was overcome in the context of convolutional neural networks by creating "shortcut connections" between layers, in the so-called deep residual learning framework. Nonetheless, a very popular deep learning technique, the Deep Belief Network, still suffers from vanishing gradients when dealing with discriminative tasks. Therefore, this paper proposes the Residual Deep Belief Network, which applies layer-by-layer information reinforcement to improve feature extraction and knowledge retention, supporting better discriminative performance. Experiments conducted over three public datasets demonstrate its robustness on the task of binary image classification. | - |
Description: dc.description | São Paulo State University - UNESP | - |
Format: dc.format | 231-241 | - |
Language: dc.language | en | - |
Relation: dc.relation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
Source: dc.source | Scopus | - |
Keywords: dc.subject | Deep Belief Networks | - |
Keywords: dc.subject | Residual networks | - |
Keywords: dc.subject | Restricted Boltzmann Machines | - |
Title: dc.title | A Layer-Wise Information Reinforcement Approach to Improve Learning in Deep Belief Networks | - |
Appears in collections: | Repositório Institucional - Unesp |
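The abstract attributes the Residual Deep Belief Network's gains to "shortcut connections" that preserve shallow-layer information through a deep stack. As a rough, hypothetical sketch of that generic residual idea (not the paper's actual formulation, which this record does not include), the toy forward pass below compares a plain sigmoid stack with one that reinjects each layer's input:

```python
import numpy as np

# Illustrative sketch only: this shows the generic residual idea
# (output = activation(W @ x + b) + x), not the Residual DBN method
# described in the record's abstract. All names here are made up.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plain_layer(x, W, b):
    return sigmoid(W @ x + b)

def residual_layer(x, W, b):
    # Shortcut connection: reinject the layer input so information
    # from shallower layers survives through the stack.
    return sigmoid(W @ x + b) + x

dim, depth = 8, 20
x = rng.normal(size=dim)
Ws = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(depth)]
bs = [np.zeros(dim) for _ in range(depth)]

h_plain, h_res = x, x
for W, b in zip(Ws, bs):
    h_plain = plain_layer(h_plain, W, b)
    h_res = residual_layer(h_res, W, b)

# With small weights, the plain stack is squashed into (0, 1) at every
# layer, while the residual stack keeps accumulating input information.
print("plain:", h_plain.mean(), "residual:", h_res.mean())
```

The same preserved-information argument motivates the layer-by-layer reinforcement the abstract describes for Deep Belief Networks, where each layer would otherwise only see the squashed output of the one below it.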