How optimizing perplexity can affect the dimensionality reduction on word embeddings visualization?

Full metadata record
Author(s): dc.contributor Universidade Estadual Paulista (Unesp)
Author(s): dc.creator Rosa, Gustavo H. de [UNESP]
Author(s): dc.creator Brega, Jose R. F. [UNESP]
Author(s): dc.creator Papa, Joao P. [UNESP]
Accepted: dc.date.accessioned 2022-02-22T00:09:55Z
Available: dc.date.available 2022-02-22T00:09:55Z
Issued: dc.date.issued 2020-12-09
Issued: dc.date.issued 2019-11-30
Full source: dc.identifier http://dx.doi.org/10.1007/s42452-019-1689-4
Full source: dc.identifier http://hdl.handle.net/11449/196603
Source: dc.identifier.uri http://educapes.capes.gov.br/handle/11449/196603
Description: dc.description Traditional word embedding approaches, such as bag-of-words models, tackle the problem of text data representation by linking the words in a document to a binary vector that marks whether or not each word occurs. Additionally, a term frequency-inverse document frequency encoding provides a numerical statistic reflecting how important a particular word is in a document. Nevertheless, the major vulnerability of such models is the loss of contextual meaning, which prevents them from learning proper pieces of information. A newer neural-based embedding approach, known as Word2Vec, tries to mitigate that issue by minimizing the loss of predicting a vector for a particular word from its surrounding words. However, as these embedding-based methods produce high-dimensional data, it is impossible to visualize them accurately. With that in mind, dimensionality reduction techniques, such as t-SNE, provide a way to generate two-dimensional data, allowing its visualization. One common problem with such reductions is the setting of their hyperparameters, such as the perplexity parameter. Therefore, this paper addresses the problem of selecting a suitable perplexity through a meta-heuristic optimization process. Meta-heuristic-driven techniques, such as Artificial Bee Colony, Bat Algorithm, Genetic Programming, and Particle Swarm Optimization, are employed to find proper values for the perplexity parameter. The results revealed that optimizing t-SNE's perplexity is suitable for improving data visualization and is thus an exciting field to be fostered.
Description: dc.description Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Description: dc.description Sao Paulo State Univ, Dept Comp, Av Eng Luiz Edmundo Carrijo Coube 14-01, Bauru, SP, Brazil
Description: dc.description FAPESP: 2019/02205-5
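The abstract above describes tuning t-SNE's perplexity hyperparameter with meta-heuristics such as Particle Swarm Optimization. The following is a minimal sketch of that idea, not the paper's actual method: the `fitness` function is a synthetic stand-in with a known minimum at perplexity 30 (in the real setting it would be some quality measure of the resulting t-SNE embedding, such as its final KL divergence), and the PSO parameter values are illustrative defaults.

```python
import numpy as np

# Hypothetical stand-in objective: a convex curve with its minimum at
# perplexity = 30, replacing the (expensive) t-SNE embedding-quality score.
def fitness(perplexity):
    return (perplexity - 30.0) ** 2

def pso_perplexity(fitness, bounds=(5.0, 50.0), n_particles=10, n_iters=50,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization over a single scalar parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, n_particles)            # candidate perplexities
    vel = np.zeros(n_particles)
    pbest = pos.copy()                                # per-particle best position
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)]               # swarm-wide best position
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)              # keep candidates in bounds
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)]
    return gbest

best = pso_perplexity(fitness)
print(f"best perplexity found: {best:.2f}")
```

Swapping the synthetic `fitness` for one that runs t-SNE at the candidate perplexity and scores the embedding would reproduce the spirit of the paper's experiment, at the cost of one full t-SNE run per fitness evaluation.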
Format: dc.format 17
Language: dc.language en
Publisher: dc.publisher Springer
Relation: dc.relation SN Applied Sciences
Source: dc.source Web of Science
Keywords: dc.subject Word embeddings
Keywords: dc.subject Dimensionality reduction
Keywords: dc.subject Meta-heuristic optimization
Title: dc.title How optimizing perplexity can affect the dimensionality reduction on word embeddings visualization?
File type: dc.type digital book
Appears in collections: Repositório Institucional - Unesp

There are no files associated with this item.