A fast access big data approach for configurable and scalable object storage Enabling mixed fault-tolerance

Full metadata record

Author(s): dc.contributor: Universidade Estadual Paulista (UNESP)
Author(s): dc.creator: Valêncio, Carlos Roberto
Author(s): dc.creator: Caetano, André Francisco Morielo
Author(s): dc.creator: Colombini, Angelo Cesar
Author(s): dc.creator: Tronco, Mário Luiz
Author(s): dc.creator: Fortes, Márcio Zamboti
Date accessioned: dc.date.accessioned: 2021-03-11T00:45:24Z
Date available: dc.date.available: 2021-03-11T00:45:24Z
Date issued: dc.date.issued: 2018-12-11
Date issued: dc.date.issued: 2017-07-01
Full source of the material: dc.identifier: http://dx.doi.org/10.3844/jcssp.2017.192.198
Full source of the material: dc.identifier: http://hdl.handle.net/11449/174933
Source: dc.identifier.uri: http://educapes.capes.gov.br/handle/11449/174933
Description: dc.description: The progressive growth in the volume of digital data has become a technological challenge of great interest in the field of computer science. This is because, with the worldwide spread of personal computers and networks, content generation has taken on larger proportions and very different formats from what had been usual until then. Analyzing and extracting relevant knowledge from these large, complex masses of data is particularly interesting, but first it is necessary to develop techniques that enable their resilient storage. Storage systems very often use a replication scheme to preserve the integrity of stored data. This involves generating copies of all information so that individual hardware failures, inherent in any massive storage infrastructure, do not compromise access to what was stored. However, accommodating such copies often requires far more real storage space than the information would originally occupy. For this reason, error correction codes, or erasure codes, have been used; they take a mathematical approach considerably more refined than simple replication, incurring a smaller storage overhead than their predecessor techniques. The contribution of this work is a fully decentralized storage strategy that, on average, improves access latency by over 80% for both replicated and encoded data, while reducing the overhead by 55% for a terabyte-sized dataset when encoded, compared to related works in the literature.
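The overhead difference between replication and erasure coding mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method; the parameters (3 replicas; k = 10 data blocks, m = 4 parity blocks for a Reed-Solomon-style code) are assumptions chosen only to show the arithmetic:

```python
# Sketch: storage overhead of n-way replication vs. a (k, m) erasure code.
# All parameters below are illustrative assumptions, not values from the paper.

def replication_overhead(replicas: int) -> float:
    """Bytes stored per original byte under n-way replication."""
    return float(replicas)

def erasure_overhead(k: int, m: int) -> float:
    """Bytes stored per original byte under a (k, m) erasure code:
    k data blocks plus m parity blocks, tolerating up to m lost blocks."""
    return (k + m) / k

rep = replication_overhead(3)   # tolerates loss of 2 copies
ec = erasure_overhead(10, 4)    # tolerates loss of any 4 of 14 blocks
print(f"replication: {rep:.1f}x, erasure (10, 4): {ec:.1f}x")
```

With these assumed parameters the erasure code stores 1.4 bytes per original byte while tolerating four block failures, versus 3.0 bytes for triple replication tolerating two, which is the kind of gap that motivates the coded approach.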
Format: dc.format: 192-198
Language: dc.language: en
Relation: dc.relation: Journal of Computer Science
Relation: dc.relation: 0,147
Rights: dc.rights: openAccess
Keywords: dc.subject: Big data
Keywords: dc.subject: Cache
Keywords: dc.subject: Data storage
Keywords: dc.subject: Erasure coding
Keywords: dc.subject: Object storage
Title: dc.title: A fast access big data approach for configurable and scalable object storage Enabling mixed fault-tolerance
File type: dc.type: digital book
Appears in collections: Repositório Institucional - Unesp

There are no files associated with this item.