GemBode and PhiBode: Adapting Small Language Models to Brazilian Portuguese

Full metadata record
Contributor: dc.contributor: Universidade Estadual Paulista (UNESP)
Contributor: dc.contributor: Universidade Federal de Goiás (UFG)
Author: dc.creator: Garcia, Gabriel Lino
Author: dc.creator: Paiola, Pedro Henrique
Author: dc.creator: Garcia, Eduardo
Author: dc.creator: Ribeiro Manesco, João Renato
Author: dc.creator: Papa, João Paulo
Date accessioned: dc.date.accessioned: 2025-08-21T17:16:25Z
Date available: dc.date.available: 2025-08-21T17:16:25Z
Date issued: dc.date.issued: 2025-04-29
Date issued: dc.date.issued: 2024-12-31
Source: dc.identifier: http://dx.doi.org/10.1007/978-3-031-76607-7_17
Source: dc.identifier: https://hdl.handle.net/11449/301403
URI: dc.identifier.uri: http://educapes.capes.gov.br/handle/11449/301403
Description: dc.description: Recent advances in generative capabilities provided by large language models have reshaped technology research and human society's cognitive abilities, bringing new innovative capacities to artificial intelligence solutions. However, the size of such models has raised several concerns regarding their alignment with hardware-limited resources. This paper presents a comprehensive study on training Portuguese-focused Small Language Models (SLMs). We developed a unique dataset for training our models and employed full fine-tuning, as well as PEFT approaches, for comparative analysis. We used Microsoft's Phi and Google's Gemma as base models to create our own, named PhiBode and GemBode. These models range from approximately 1 billion to 7 billion parameters, with a total of ten models developed. Our findings provide valuable insights into the performance and applicability of these models, contributing significantly to the field of Portuguese language processing. This research is a step forward in understanding and improving the performance of SLMs in Portuguese. The comparative analysis of the models provides a clear benchmark for future research in this area. The results demonstrate the effectiveness of our training methods and the potential of our models for various applications. This paper significantly contributes to language model training, particularly for the Portuguese language.
Description: dc.description: School of Sciences, São Paulo State University (UNESP), SP
Description: dc.description: Institute of Informatics, Federal University of Goiás (UFG), GO
Format: dc.format: 228-243
Language: dc.language: en
Relation: dc.relation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Source: dc.source: Scopus
Keywords: dc.subject: Bode
Keywords: dc.subject: Generative Artificial Intelligence
Keywords: dc.subject: Natural Language Processing
Keywords: dc.subject: Portuguese
Keywords: dc.subject: Small Language Models
Title: dc.title: GemBode and PhiBode: Adapting Small Language Models to Brazilian Portuguese
File type: dc.type: digital lesson
Appears in collections: Repositório Institucional - Unesp

There are no files associated with this item.
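The abstract contrasts full fine-tuning with PEFT (parameter-efficient fine-tuning) approaches. The paper does not specify its PEFT configuration here; the following is a minimal NumPy sketch of low-rank adaptation (LoRA), a common PEFT method, illustrating why it is attractive on hardware-limited resources. All dimensions, rank, and initializations below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of low-rank adaptation (LoRA): instead of updating the full
# weight matrix W (d_out x d_in), freeze W and learn a low-rank update
# B @ A with rank r << min(d_out, d_in). Shapes here are hypothetical.

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4                 # illustrative layer size and adapter rank

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def forward(x):
    # Adapted layer: y = (W + B @ A) @ x; with B = 0 at init this equals W @ x,
    # so the adapter starts as an identity perturbation of the frozen model.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)      # zero-init adapter changes nothing

# Parameter count comparison: full fine-tuning updates d_out * d_in values,
# the adapter updates only r * (d_in + d_out).
full_params = d_out * d_in                 # 4096
lora_params = r * (d_in + d_out)           # 512
```

The design choice the abstract alludes to follows from the last two lines: the trainable-parameter count shrinks by roughly a factor of `min(d_in, d_out) / (2r)`, which is what makes PEFT viable on the hardware-limited setups the paper targets.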