Abstract
Safe reinforcement learning (RL) with hard constraint guarantees is a promising optimal control direction for multi-energy management systems. It requires only the environment-specific constraint functions themselves a priori, and not a complete model (i.e., plant, disturbance and noise models, and prediction models for states not included in the plant model, such as demand, weather and price forecasts). The project-specific upfront and ongoing engineering efforts are therefore still reduced, better representations of the underlying system dynamics can still be learnt, and modelling bias is kept to a minimum (no model-based objective function). However, even the constraint functions alone are not always trivial to provide accurately in advance (e.g., an energy balance constraint requires a detailed determination of all energy inputs and outputs), leading to potentially unsafe behaviour. Furthermore, while computing the closest feasible action yields a high sample efficiency (as in OptLayer), that action does not necessarily have a high utility, especially in the initial learning stage of RL agents. Conversely, providing a safe fallback policy a priori can give a high initial utility, but has been shown to result in poor sample efficiency and an inability to include equality constraints (as in SafeFallback). In this paper, we present two novel advancements: (I) combining the OptLayer and SafeFallback methods, named OptLayerPolicy, to increase the initial utility while keeping a high sample efficiency and the possibility to formulate equality constraints; and (II) introducing self-improving hard constraints, to increase the accuracy of the constraint functions as more and new data become available, so that better policies can be learnt. Both advancements keep the constraint formulation decoupled from the RL formulation, so that new (presumably better) RL algorithms can act as drop-in replacements. In a simulated multi-energy system case study, we show that the initial utility increases to 92.4% (OptLayerPolicy) compared with 86.1% (OptLayer), and that the utility after training increases to 104.9% (GreyOptLayerPolicy) compared with 103.4% (OptLayer), all relative to a vanilla RL benchmark. Although introducing surrogate functions into the optimisation problem requires special attention, we conclude that the newly presented GreyOptLayerPolicy method is the most advantageous.
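For readers unfamiliar with safety layers of this kind, the sketch below illustrates the closest-feasible-action projection that OptLayer-style methods rely on, written as a small quadratic programme in CVXPY. The linear(ised) constraint matrices `A`, `b`, `E`, `d`, the fallback action `a_fallback` and the blending weight `w` are illustrative assumptions used to convey the idea of combining a safety projection with a safe fallback policy; they are not the authors' implementation.

```python
# Minimal sketch of an OptLayer-style safety layer (illustrative only, not the
# authors' implementation). A proposed RL action is projected onto a set of
# hard constraints; blending with a safe fallback action is shown as one
# plausible way to combine the projection with a fallback policy.
import numpy as np
import cvxpy as cp


def safe_action(a_rl, a_fallback, A, b, E, d, w=0.1):
    """Return the feasible action closest to a blend of the RL action and
    a safe fallback action.

    a_rl       : action proposed by the RL agent
    a_fallback : action from an a-priori safe fallback policy (assumption)
    A, b       : inequality constraints A @ a <= b (e.g. capacity limits)
    E, d       : equality constraints  E @ a == d (e.g. an energy balance)
    w          : weight on the RL action; kept small early in training
                 (assumption, to reflect the higher initial utility of the
                 fallback policy)
    """
    a = cp.Variable(a_rl.shape[0])
    target = w * a_rl + (1.0 - w) * a_fallback
    objective = cp.Minimize(cp.sum_squares(a - target))
    constraints = [A @ a <= b, E @ a == d]
    cp.Problem(objective, constraints).solve()
    return a.value


# Toy usage: two dispatchable units that must jointly meet a demand of 5.0
# while each staying within [0, 4].
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([4.0, 4.0, 0.0, 0.0])
E = np.ones((1, 2))
d = np.array([5.0])
print(safe_action(np.array([6.0, -1.0]), np.array([2.5, 2.5]), A, b, E, d))
```

In the paper's self-improving variant, the constraint functions themselves are refined from data (e.g. via surrogate models), which would replace the fixed `A`, `b`, `E`, `d` above with learned approximations; the details of that procedure are given in the article itself.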
Original language | English
---|---
Article number | 101202
Number of pages | 14
Journal | Sustainable Energy, Grids and Networks
Volume | 36
Issue number | 2023
Publication status | Published - Dec 2023
Bibliographical note
Funding Information: This research has received equal support from ABB n.v. and the Flemish Agency for Innovation and Entrepreneurship (VLAIO) under grant HBC.2019.2613.
Publisher Copyright:
© 2023 Elsevier Ltd
Keywords
- Reinforcement Learning
- Surrogate Optimisation
- Constraints
- Multi-energy systems
- Energy Management Systems
Projects
- VLAOO13: Baekeland mandate: Safe reinforcement learning for optimal control in multi-energy systems
Messagie, M., Nowe, A. & Ceusters, G.
1/01/20 → 31/12/24
Project: Applied