Research Council - Backup mandate H. Plisnier: Robots learning complex tasks from human feedback

Project Details

Description

Robots are increasingly found in human-populated environments (such as manufacturing plants, hospitals, care centers for the elderly, and even homes), but their knowledge is generally fixed, preprogrammed by an expert designer. To adapt to novel environments and better assist people in everyday tasks, robots must be able to learn new skills that the robot designer might not have anticipated. To speed up learning, they can leverage the knowledge of the humans around them. Since these humans may not be familiar with programming, teaching the agent should be as intuitive and easy as possible.

Reinforcement Learning methods allow an agent to acquire new skills by interacting repeatedly with its environment. Furthermore, Hierarchical Reinforcement Learning techniques make it possible to learn complex tasks and to reuse acquired skills as needed. We propose to design a Hierarchical Reinforcement Learning architecture that receives feedback from a non-expert human teacher. Learning from human feedback speeds up learning and potentially lets the user teach the agent the behavior they like, making the agent customizable. Moreover, the agent will act transparently, since the human teacher needs to be aware of its current goal and performance in order to adjust their feedback to the agent's needs.
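As a rough illustration of the kind of interaction the project targets, the sketch below shows one common way of combining an environment's reward with human feedback, namely reward shaping on top of tabular Q-learning. Everything in it (the one-dimensional corridor task, the simulated human_feedback function, and the learning parameters) is a hypothetical stand-in chosen for the example, not the architecture proposed in this project.

```python
import random
from collections import defaultdict

# Hypothetical one-dimensional corridor task: the agent starts in cell 0 and
# must reach cell N - 1. The environment only rewards reaching the goal; a
# simulated "human teacher" adds sparse directional feedback on top of it.
N = 10
ACTIONS = (-1, +1)  # move left / move right

def step(state, action):
    next_state = min(max(state + action, 0), N - 1)
    done = next_state == N - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

def human_feedback(state, action):
    # Stand-in for a non-expert teacher: approve moves toward the goal,
    # disapprove moves away from it.
    return 0.5 if action == +1 else -0.5

Q = defaultdict(float)                  # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # assumed learning parameters

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Reward shaping: the teacher's feedback is simply added to the
        # environment reward before the usual Q-learning update.
        shaped = reward + human_feedback(state, action)
        bootstrap = 0.0 if done else gamma * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (shaped + bootstrap - Q[(state, action)])
        state = next_state

# Greedy action learned for each cell (expected: +1 everywhere except the goal).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)})
```

Reward shaping is only one way of injecting human feedback; policy shaping, in which the feedback directly biases action selection rather than the reward, is another common option.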
Short title: OZR backup mandate
Acronym: OZR3211
Status: Finished
Effective start/end date: 1/01/18 - 31/12/18

Keywords

  • computer science
  • informatics