Solving optimization problems is key to decision making in many real-life analytics applications. However, the coefficients of these optimization problems are often uncertain and depend on external factors, such as future demand or energy or stock prices. Machine learning (ML) models, especially neural networks, are increasingly used to estimate these coefficients in a data-driven way. Hence, end-to-end predict-and-optimize approaches, which take into account how effective the predicted values are for solving the optimization problem, have received increasing attention. For integer linear programming problems, a popular approach to overcoming their non-differentiability is to add a quadratic penalty term to the continuous relaxation, so that results on differentiating over quadratic programs can be used. Instead, we investigate the use of the more principled logarithmic barrier term, as widely used in interior point solvers for linear programming. Specifically, instead of differentiating the KKT conditions, we consider the homogeneous self-dual formulation of the LP, and we show the relation between the interior point step direction and the corresponding gradients needed for learning. Finally, our empirical experiments demonstrate that our approach performs as well as, if not better than, the state-of-the-art QPTL (Quadratic Programming Task Loss) formulation of Wilder et al. [29] and the SPO approach of Elmachtoub and Grigas [12].
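The core mechanism behind such approaches can be illustrated with a minimal NumPy sketch. This is not the paper's homogeneous self-dual algorithm: it is a generic log-barrier smoothing of an equality-constrained LP, solved by Newton's method, with the gradient of the solution with respect to the predicted cost vector obtained by implicit differentiation of the barrier KKT conditions. The function names, the toy two-variable LP, and the choice of barrier weight `mu` are all hypothetical.

```python
import numpy as np

def solve_barrier_lp(c, A, b, x0, mu=0.1, iters=50):
    """Solve the log-barrier smoothing of  max c^T x  s.t.  Ax = b, x >= 0:
         max  c^T x + mu * sum(log x)   s.t.  Ax = b,
    by Newton's method on the KKT system. x0 must be strictly positive and
    feasible; this is a toy stand-in for a real interior point solver."""
    n, m = c.size, b.size
    x, lam = x0.astype(float).copy(), np.zeros(m)
    for _ in range(iters):
        r_dual = c + mu / x - A.T @ lam      # stationarity residual
        r_prim = A @ x - b                   # primal feasibility residual
        H = np.diag(mu / x**2)               # curvature from the barrier term
        KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
        step = np.linalg.solve(KKT, np.concatenate([r_dual, -r_prim]))
        dx, dlam = step[:n], step[n:]
        t = 1.0                              # damping keeps x strictly positive
        while np.any(x + t * dx <= 0):
            t *= 0.5
        x, lam = x + t * dx, lam + t * dlam
    return x, lam

def jacobian_x_wrt_c(x, A, mu):
    """dx*/dc by implicit differentiation of the barrier KKT conditions:
       [H, A^T; A, 0] [dx/dc; dlam/dc] = [I; 0]  with H = mu * diag(1/x^2)."""
    n, m = x.size, A.shape[0]
    H = np.diag(mu / x**2)
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.vstack([np.eye(n), np.zeros((m, n))])
    return np.linalg.solve(KKT, rhs)[:n, :]
```

In a predict-and-optimize training loop, this Jacobian is what backpropagation would multiply against the task-loss gradient with respect to the optimal solution, letting gradients flow from the decision quality back into the neural network that predicts `c`.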

Original language: English
Title: Interior Point Solving for LP-based prediction+optimisation
Number of pages: 11
Status: Published - 9 Dec 2020
Event: Neural Information Processing Systems, 2020: Online Conference - Online
Duration: 6 Dec 2020 - 12 Dec 2020

Publication series

Name: Advances in Neural Information Processing Systems
Print ISSN: 1049-5258


Conference: Neural Information Processing Systems, 2020


