Step-wise explanations for constraint solving

Research output: Thesis › PhD Thesis

Abstract

Constraint solving techniques provide decision support in many everyday and industrial applications, ranging from the scheduling of nurses in a hospital to sequencing jobs on machines under resource constraints. AI practitioners harness the power of highly efficient solvers by building a model-based representation of these constraint satisfaction problems (CSPs). Advances in constraint solving techniques and hardware improvements allow such systems to consider millions of alternatives in a short period of time. However, this complex reasoning makes it increasingly difficult to understand why certain decisions are made, i.e., which parts of the problem constraints were used to obtain the solution.
In this thesis, we study the problem of explaining the inference steps leading to a solution, in a human-interpretable way, agnostic of the ‘inner workings’ of the underlying solver used. The main challenge is to explain the solution by means of a sequence of simple explanation steps, each of which should be as cognitively easy as possible for a human to understand.
In our first contribution, we formalise the problem of step-wise explaining the maximal consequence of constraint satisfaction problems. We propose algorithms for generating a sequence of explanation steps. Each explanation step combines constraints with variable assignments (facts) to infer new information about the solution. We propose the use of a cost function to estimate the difficulty of an individual explanation step. This cost function captures, for example, the interaction between constraints and facts, or the type of constraints present in the explanation. Our algorithms for explanation generation exploit a one-to-one correspondence between so-called non-redundant explanations and Minimal Unsatisfiable Subsets (MUSs) of a derived unsatisfiable formula. Since some generated explanations are still difficult to understand, we propose to decompose a complex explanation into smaller nested explanations using reasoning by contradiction, thus providing the ability to zoom in on explanations.
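
To illustrate the correspondence with MUSs, the following is a minimal sketch, not the thesis implementation: it assumes the python-sat library, a toy propositional encoding, and a hypothetical difficulty weighting. To explain a derived fact, the fact is negated and a MUS of the resulting unsatisfiable formula is extracted, here with a basic deletion-based shrinking procedure.

```python
# Minimal sketch (not the thesis code): deletion-based MUS extraction showing
# how a non-redundant explanation step corresponds to a MUS of a derived
# unsatisfiable formula. Requires `python-sat` (pysat).
from pysat.solvers import Solver

def is_sat(clauses):
    """Check satisfiability of a CNF given as a list of integer clauses."""
    with Solver(bootstrap_with=clauses) as s:
        return s.solve()

def deletion_based_mus(clauses):
    """Shrink an UNSAT clause set to a minimal unsatisfiable subset."""
    assert not is_sat(clauses), "MUS extraction needs an UNSAT formula"
    mus = list(clauses)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i+1:]   # try dropping clause i
        if not is_sat(candidate):
            mus = candidate               # still UNSAT without it: drop it
        else:
            i += 1                        # clause i is needed: keep it
    return mus

# Toy example: constraints {x1, x1 -> x2} plus an unrelated clause x3.
# To explain the derived fact x2, negate it and extract a MUS of C ∧ ¬x2.
constraints = [[1], [-1, 2], [3]]
negated_fact = [-2]
explanation = deletion_based_mus(constraints + [negated_fact])
print(explanation)   # [[1], [-1, 2], [-2]]: the unrelated clause [3] is gone

# A hypothetical cost function estimating the cognitive difficulty of the
# step, e.g. weighting facts (unit clauses) and constraints differently.
cost = sum(1 if len(cl) == 1 else 2 for cl in explanation)
print("estimated difficulty:", cost)
```
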
Unfortunately, MUS extraction techniques do not guarantee the optimality of an explanation with respect to such a cost function. Therefore, we address how to generate optimal explanations by introducing Optimal Constrained Unsatisfiable Subsets (OCUS), which satisfy given structural constraints (e.g., ensuring that we explain one fact of the solution at each step) and are provably optimal with respect to a given objective function. We introduce a hitting-set-based algorithm that leverages the strengths of Mixed Integer Programming (MIP) and Boolean Satisfiability (SAT) for efficient OCUS computation.
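
The sketch below illustrates the general implicit hitting-set loop behind such computations, under stated assumptions: it uses PuLP (CBC) as the MIP solver and python-sat for the satisfiability checks, the clauses and costs are a toy example, and the structural constraints of OCUS are omitted; it is not the thesis' algorithm itself.

```python
# Minimal sketch of an implicit hitting-set loop for computing a cost-minimal
# unsatisfiable subset: a MIP proposes cheapest hitting sets, a SAT solver
# checks them. Requires `pulp` and `python-sat`.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, PULP_CBC_CMD
from pysat.solvers import Solver

def min_cost_hitting_set(to_hit, costs):
    """MIP: cheapest set of clause indices hitting every set in `to_hit`."""
    prob = LpProblem("hitting_set", LpMinimize)
    x = {i: LpVariable(f"x_{i}", cat=LpBinary) for i in costs}
    prob += lpSum(costs[i] * x[i] for i in costs)        # objective: total cost
    for s in to_hit:
        prob += lpSum(x[i] for i in s) >= 1              # hit each set once
    prob.solve(PULP_CBC_CMD(msg=0))
    return {i for i in costs if x[i].value() > 0.5}

def optimal_unsat_subset(clauses, costs):
    """Return a cost-minimal unsatisfiable subset of an UNSAT clause set."""
    to_hit = []
    while True:
        hs = min_cost_hitting_set(to_hit, costs)
        with Solver(bootstrap_with=[clauses[i] for i in hs]) as s:
            if not s.solve():
                return hs                                # UNSAT and cheapest
            model = set(s.get_model())
        # SAT case: the clauses satisfied by this model form a satisfiable
        # subset, so every unsatisfiable subset must intersect its complement;
        # add that complement as a new set to hit.
        satisfied = {i for i, cl in enumerate(clauses)
                     if any(lit in model for lit in cl)}
        to_hit.append(set(costs) - satisfied)

# Toy run: x1, x1 -> x2, x2 -> x3, an irrelevant cheap clause ¬x4, and the
# negated fact ¬x3; the weights play the role of the difficulty estimate.
clauses = [[1], [-1, 2], [-2, 3], [-4], [-3]]
costs = {0: 2, 1: 3, 2: 3, 3: 1, 4: 1}
print(sorted(optimal_unsat_subset(clauses, costs)))      # [0, 1, 2, 4]
```

In this division of labour, the MIP side carries the objective (and, in OCUS, the structural constraints), while the SAT side only answers consistency queries, which is what makes combining the two solver technologies attractive.
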
So far, we have assumed the existence of a cost function that estimates the difficulty of an explanation. However, we do not currently know what constitutes a good cost function. Furthermore, at each step, there are many candidate explanations for a single fact, and we do not currently know how to characterise the most helpful explanation for a user; this may even vary from one user to another. To address this challenge, we study learning preferences for explanations directly from users. In our third contribution, we investigate whether we can learn a preference function over a feature representation of explanations using learning-to-rank techniques, and how to integrate it into an explanation-generation workflow. We show on Sudoku puzzles that we can learn functions that score well when choosing between two smallest MUSs. Furthermore, by using an iterative learning setup, we can directly learn a linear scoring function that can be integrated to generate preferred explanations directly. Finally, we conduct a small-scale user study to evaluate how well the learned functions perform when generating explanations directly versus when selecting from a pre-generated set of explanations.
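
As an illustration of the learning-to-rank idea, the following minimal sketch uses scikit-learn and a RankSVM-style pairwise reduction with a logistic loss; the feature representation, preference pairs, and candidate explanations are hypothetical stand-ins, not the data or models from the user study.

```python
# Minimal sketch: learning a linear scoring function over explanation features
# from pairwise preferences ("the user preferred explanation A over B").
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [#constraints used, #facts used, uses_alldifferent]
pairs = [                         # (preferred explanation, rejected explanation)
    (np.array([1, 2, 0]), np.array([3, 4, 1])),
    (np.array([2, 1, 0]), np.array([2, 5, 1])),
    (np.array([1, 3, 1]), np.array([4, 3, 1])),
]

# Pairwise reduction: classify the sign of the feature difference.
X = np.array([a - b for a, b in pairs] + [b - a for a, b in pairs])
y = np.array([1] * len(pairs) + [0] * len(pairs))
ranker = LogisticRegression(fit_intercept=False).fit(X, y)
w = ranker.coef_[0]               # learned linear scoring function

def score(explanation_features):
    """Higher score means predicted to be preferred by the user."""
    return float(np.dot(w, explanation_features))

candidates = [np.array([2, 2, 0]), np.array([1, 5, 1]), np.array([3, 1, 1])]
best = max(candidates, key=score)
print("preferred candidate:", best, "score:", round(score(best), 3))
```

Because the learned function is linear in the features, its weights can in principle be plugged back in as the objective of an optimal-subset computation, which is the kind of integration the iterative learning setup aims at.
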
Overall, our results show that we are able to step-wise explain the maximal consequence of constraint satisfaction problems, with a focus on human interpretability, by providing the most helpful explanation to the user at each step.
Original language: English
Awarding Institution
  • Vrije Universiteit Brussel
Supervisors/Advisors
  • Guns, Tias, Supervisor
  • Bogaerts, Bart, Supervisor
  • Ginis, Vincent, Supervisor
Award date: 21 May 2024
Publication status: Published - 2024
