As artificial intelligence (AI) tools employ more advanced reasoning
mechanisms and computation, it becomes increasingly difficult to
understand why certain decisions are made. Explainable AI research
aims to fulfill the need for trustworthy AI systems that can explain
their reasoning in a human-understandable way. Our proposed
contribution to explainable AI is situated in the domain of constraint
solving and optimization, where we aim to augment constraint
solvers with explainable agency.
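As a concrete illustration of what such explainable agency can look like, the sketch below answers one common question, "why does this model have no solution?", by computing a minimal unsatisfiable subset (MUS): a small set of constraints that together explain the conflict. The use of the CPMpy modelling library, the toy constraints, and the deletion_based_mus helper are illustrative assumptions, not components of the proposed framework.

```python
# A minimal sketch, assuming the CPMpy modelling library and a toy constraint
# set; deletion-based MUS extraction is one classic way to explain infeasibility.
from cpmpy import Model, intvar

x, y, z = intvar(0, 10, shape=3)                          # integer decision variables
constraints = [x + y <= 5, y + z >= 12, x >= 3, z <= 4]   # jointly unsatisfiable

def deletion_based_mus(cons):
    """Shrink an unsatisfiable constraint set to a minimal explanation (MUS)."""
    core = list(cons)
    for c in list(core):
        trial = [d for d in core if d is not c]   # tentatively drop constraint c
        if not Model(trial).solve():              # still unsatisfiable without c?
            core = trial                          # then c is not needed to explain
    return core

print(deletion_based_mus(constraints))
# -> three of the four constraints; x >= 3 is dropped, because the conflict
#    already follows from x + y <= 5, y + z >= 12 and z <= 4 (given x >= 0)
```

The returned subset is a compact, human-checkable reason for infeasibility; the framework envisioned in this project targets such explanations at larger scale, for more question types, and in an interactive setting.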
Based on research questions that emerged from a preliminary study
performed by the two PIs, the high-level objective of this research
project is to design an integrated framework for explainable
constraint satisfaction and optimization. Developing such a
framework raises several research questions, related to scalability (the
ability to explain large instances), generality (the ability to answer
different types of questions), and interactivity (the ability to interact
with a user in a natural and fluent way).