Many real-world scenarios involve multiple agents operating in the
same environment. Multi-agent systems are a potent tool for
researchers to study such environments. In most of the current
research, it is assumed that agents optimise for a single objective.
This is, however, often not the case, as many scenarios are
inherently multi-objective. For example, a logistics provider has the
(possibly conflicting) objectives of delivering all goods to their
destination as fast as possible, at the lowest possible cost, while also
minimising carbon emissions to reduce its strain on the environment.
In this proposal, we advance the state-of-the-art in multi-objective
multi-agent systems (MOMAS), using a combination of game-theoretic and reinforcement learning approaches. We focus on three
main objectives. First, we develop the necessary foundations from a
game-theoretic perspective by studying under what conditions agents
can reach different solution concepts. Second, we aim to develop
novel algorithms for agents in MOMAS with finite action and state
spaces. The last objective expands our view to more complex
systems. In systems with high-dimensional and infinite action and
state spaces, we investigate two auxiliary tools for agents to
efficiently learn optimal policies. The first method we consider is
opponent modelling, where agents construct models of other agents.
The second method is communication, which allows agents to
acquire the necessary knowledge faster.