Experience generalization for multi-agent reinforcement learning

Full metadata record
Metadata: Description
Author(s): dc.contributor: Universidade Estadual Paulista (UNESP)
Author(s): dc.creator: Pegoraro, Renê
Author(s): dc.creator: Costa, AHR
Author(s): dc.creator: Ribeiro, CHC
Date accessioned: dc.date.accessioned: 2021-03-10T16:47:24Z
Date available: dc.date.available: 2021-03-10T16:47:24Z
Date issued: dc.date.issued: 2014-05-20
Date issued: dc.date.issued: 2001-01-01
Full source of the material: dc.identifier: http://dx.doi.org/10.1109/SCCC.2001.972652
Full source of the material: dc.identifier: http://hdl.handle.net/11449/8273
Source: dc.identifier.uri: http://educapes.capes.gov.br/handle/11449/8273
Description: dc.description: On-line learning methods have been applied successfully in multi-agent systems to achieve coordination among agents. Learning in multi-agent systems implies a non-stationary scenario perceived by the agents, since the behavior of other agents may change as they simultaneously learn how to improve their actions. Non-stationary scenarios can be modeled as Markov Games, which can be solved using the Minimax-Q algorithm, a combination of Q-learning (a Reinforcement Learning (RL) algorithm that directly learns an optimal control policy) and the Minimax algorithm. However, finding optimal control policies using any RL algorithm (Q-learning and Minimax-Q included) can be very time consuming. To improve the learning time of Q-learning, we considered the QS-algorithm, in which a single experience can update more than a single action value by using a spreading function. In this paper, we contribute a Minimax-QS algorithm which combines the Minimax-Q algorithm and the QS-algorithm. We conduct a series of empirical evaluations of the algorithm in a simplified simulator of the soccer domain. We show that even using a very simple domain-dependent spreading function, the performance of the learning algorithm can be improved.
Format: dc.format: 233-239
Language: dc.language: en
Publisher: dc.publisher: Institute of Electrical and Electronics Engineers (IEEE), Computer Soc
Relation: dc.relation: SCCC 2001: XXI International Conference of the Chilean Computer Science Society, Proceedings
Rights: dc.rights: openAccess
Title: dc.title: Experience generalization for multi-agent reinforcement learning
Appears in collections: Repositório Institucional - Unesp

There are no files associated with this item.
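The core idea in the abstract, spreading a single experience across similar action values via a spreading function, can be sketched as follows. This is a minimal illustration assuming a tabular Q-function and a Gaussian spreading function over action indices; the function name, parameters, and the Gaussian choice are assumptions for illustration, not the paper's exact formulation of QS or Minimax-QS.

```python
import numpy as np

def qs_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9, sigma_width=1.0):
    """QS-style update: one experience (s, a, r, s') updates Q[s, a] and,
    with a smaller weight, the values of similar actions.

    The spreading function here is a hypothetical Gaussian over the
    distance between action indices; a real domain would use a
    domain-dependent similarity measure instead.
    """
    n_actions = Q.shape[1]
    # Standard one-step Q-learning target.
    target = r + gamma * Q[s_next].max()
    for a_prime in range(n_actions):
        # Closer actions receive a larger share of the update;
        # spread == 1 for the executed action itself.
        spread = np.exp(-((a - a_prime) ** 2) / (2 * sigma_width ** 2))
        Q[s, a_prime] += alpha * spread * (target - Q[s, a_prime])
    return Q
```

In Minimax-QS the same spreading idea is applied on top of the Minimax-Q target (a minimax value over joint actions rather than a max over own actions); the sketch above shows only the experience-generalization mechanism with a plain Q-learning target.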