Design and Interpret: A New Framework for Explainable AI

Project Details


Deep neural networks (DNNs) have delivered tremendous performance improvements across a wide range of applications. However, they have one important shortcoming: they are often regarded as black boxes, as their inner workings and generalization capabilities are not fully understood. In this project, we aim to tackle this problem by developing a new framework for explainable and interpretable AI. Two complementary research directions will be investigated. First, we argue that knowledge about the structure of the data should be incorporated into the design of DNNs, i.e. before network training, leading to network transparency: networks that are more interpretable by design. We will apply this mostly to inverse problems such as image denoising, super-resolution, and inpainting. Second, we will develop trustworthy methods for post-hoc interpretation and explanation, which analyze the behavior of a network after it has been trained. This will be demonstrated on image classification and object detection problems. We expect the two strategies to reinforce one another, leading not only to more explainable models but also to better-performing ones that outperform the current state of the art.
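As a minimal illustration of the first direction (an assumption on our part, not part of the project description), the sketch below adopts a sparse-coding data model and unrolls ISTA iterations into network layers, so that every learned parameter has a meaning fixed before training; the names `UnrolledISTA`, `signal_dim`, and `code_dim` are hypothetical:

```python
# Sketch of "interpretable by design": an unrolled ISTA network for denoising.
# Each layer mirrors one iteration of sparse-coding optimization under the
# assumed data model y = D x + noise, with x sparse.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, signal_dim: int, code_dim: int, n_layers: int = 5):
        super().__init__()
        # D plays the role of a learned dictionary; its columns encode the
        # structural prior imposed on the data.
        self.D = nn.Parameter(torch.randn(signal_dim, code_dim) * 0.1)
        self.step = nn.Parameter(torch.tensor(0.1))        # gradient step size
        self.threshold = nn.Parameter(torch.tensor(0.05))  # sparsity level
        self.n_layers = n_layers

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, signal_dim) noisy observations
        x = torch.zeros(y.shape[0], self.D.shape[1], device=y.device)
        for _ in range(self.n_layers):
            # One ISTA iteration: gradient step on ||D x - y||^2 ...
            grad = (x @ self.D.t() - y) @ self.D
            x = x - self.step * grad
            # ... followed by soft thresholding (the sparsity prior).
            x = torch.sign(x) * torch.clamp(x.abs() - self.threshold, min=0.0)
        return x @ self.D.t()  # reconstructed (denoised) signal

net = UnrolledISTA(signal_dim=64, code_dim=128)
denoised = net(torch.randn(8, 64))  # toy noisy batch
```

For the second direction, a hedged sketch of one standard post-hoc technique, a vanilla gradient saliency map; the project's actual methods are not specified in the abstract, so this is purely illustrative:

```python
# Post-hoc explanation sketch: a gradient saliency map highlighting which
# pixels most influence a trained classifier's score for a given class.
import torch

def gradient_saliency(model: torch.nn.Module, image: torch.Tensor,
                      target_class: int) -> torch.Tensor:
    # image: (1, C, H, W); model is already trained and in eval mode.
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Per-pixel importance: max absolute gradient over color channels.
    return image.grad.abs().max(dim=1)[0].squeeze(0)  # (H, W) heat map
```

Both sketches analyze or constrain a network in terms of quantities with a direct interpretation (dictionary atoms, input sensitivities), which is the common thread between the two research directions.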
Effective start/end date: 1/01/20 → 31/12/23


Keywords

  • Explainable AI
  • Deep learning

Flemish discipline codes

  • Computer vision
  • Image and language processing
  • Pattern recognition and neural networks