Learning what to observe in multi-agent systems

Yann-Michaël De Hauwere, Peter Vrancx, Ann Nowé, Toon Calders (Editor), Karl Tuyls (Editor), Mykola Pechenizkiy (Editor)

Research output: Contribution to journal › Conference paper

8 Citations (Scopus)

Abstract

A major challenge in multi-agent reinforcement learning remains dealing with the large state spaces typically associated with realistic multi-agent systems. As the state space grows, agent policies become more and more complex and learning slows down. Advanced single-agent techniques are already capable of learning optimal policies in large unknown environments. When multiple agents are present, however, the state-action space grows exponentially in the number of agents, even though these agents do not always interfere with each other, so their presence should not always be included in the state information of the other agents. We introduce a framework capable of dealing with this issue. We also present an implementation of our framework, called 2observe, which we apply to some gridworld problems.
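To make the abstract's scaling claim concrete, the following is a minimal illustrative sketch of the exponential blow-up it describes, not the paper's 2observe algorithm; the gridworld dimensions and function names are assumptions chosen for illustration.

```python
# Illustrative only: contrasts one agent's own state-action space with the
# full joint state-action space, which grows exponentially in the number
# of agents. Not the paper's 2observe implementation.

def single_space_size(n_states: int, n_actions: int) -> int:
    """Size of one agent's own state-action space, as used when other
    agents' presence is ignored outside interaction situations."""
    return n_states * n_actions

def joint_space_size(n_states: int, n_actions: int, n_agents: int) -> int:
    """Size of the joint state-action space when every agent's state and
    action are always included: (|S| * |A|) ** n_agents."""
    return (n_states * n_actions) ** n_agents

# A hypothetical 5x5 gridworld: 25 states, 4 movement actions per agent.
single = single_space_size(25, 4)    # 100
joint2 = joint_space_size(25, 4, 2)  # 10_000
joint3 = joint_space_size(25, 4, 3)  # 1_000_000
```

Learning over the single-agent space while only occasionally accounting for nearby agents avoids paying the exponential cost in the common case where agents do not interfere.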
Original language: English
Pages (from-to): 83-90
Number of pages: 8
Journal: Proceedings of the Benelux Conference on Artificial Intelligence
Volume: 21
Publication status: Published - 29 Oct 2009
Event: 21st Benelux Conference on Artificial Intelligence - Eindhoven, Netherlands
Duration: 29 Oct 2009 - 30 Oct 2009

Bibliographical note

Toon Calders, Karl Tuyls, Mykola Pechenizkiy

Keywords

  • multi-agent learning
  • reinforcement learning
  • agent systems
  • artificial intelligence

