Abstract

Combinatorial optimization has wide applications across many industries. In many real-life applications, however, some or all coefficients of the optimization problem are not known at execution time; in such applications, those coefficients are estimated using machine learning (ML) models. End-to-end predict-and-optimize approaches, which train the ML model with the downstream optimization task taken into consideration, have received increasing attention. In the case of a mixed integer linear program (MILP), previous work suggested relaxing the MILP, adding a quadratic regularizer term, and differentiating the KKT conditions to facilitate gradient-based learning. In this work, we propose instead to differentiate the homogeneous self-dual formulation of the relaxed LP, which involves a larger number of parameters. Moreover, because our formulation contains a log-barrier term, we need not add a quadratic term to make the formulation differentiable.
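The core idea behind such differentiable optimization layers can be illustrated with a minimal sketch: solve a log-barrier relaxation of an LP (whose optimality conditions are smooth, so no quadratic regularizer is needed), then obtain gradients of the solution with respect to the cost vector by implicitly differentiating those conditions. This is an illustrative assumption-laden toy, not the paper's homogeneous self-dual implementation; the problem data, solver, and function names below are invented for the example.

```python
import numpy as np

def solve_barrier_lp(A, b, c, mu=1e-2, iters=50):
    """Solve min c^T x - mu*sum(log x)  s.t.  Ax = b  (x > 0) by Newton's method.

    Toy interior-point-style solver for illustration only.
    """
    m, n = A.shape
    x, y = np.ones(n), np.zeros(m)
    for _ in range(iters):
        g = c - mu / x + A.T @ y           # stationarity residual
        r = A @ x - b                      # primal feasibility residual
        H = np.diag(mu / x**2)             # Hessian of the log-barrier term
        # Newton KKT system: [H A^T; A 0] [dx; dy] = -[g; r]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        d = np.linalg.solve(K, -np.concatenate([g, r]))
        dx, dy = d[:n], d[n:]
        # fraction-to-boundary step to keep x strictly positive
        neg = dx < 0
        t = 1.0
        if np.any(neg):
            t = min(1.0, 0.9 * np.min(-x[neg] / dx[neg]))
        x, y = x + t * dx, y + t * dy
    return x, y

def jacobian_x_wrt_c(A, x, mu):
    """dx/dc via implicit differentiation of the smooth barrier optimality system.

    Perturbing c shifts (x, y) by solving the same KKT matrix against [-I; 0];
    no quadratic regularizer is needed because the barrier Hessian is nonsingular.
    """
    m, n = A.shape
    H = np.diag(mu / x**2)
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
    return np.linalg.solve(K, rhs)[:n]     # n x n Jacobian dx/dc

# Toy LP (hypothetical data): min c^T x  s.t.  x1 + x2 = 1, x >= 0
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = solve_barrier_lp(A, b, c)
J = jacobian_x_wrt_c(A, x, mu=1e-2)
```

In a predict-and-optimize pipeline, a Jacobian like `J` is what lets the task loss back-propagate through the LP solution into the ML model that predicted `c`.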
Original language: English
Number of pages: 8
Status: Published - 7 Sep 2020
Event: Doctoral Program of the 26th International Conference on Principles and Practice of Constraint Programming - Online
Duration: 7 Sep 2020 → 11 Sep 2020
https://cp2020.a4cp.org/callfordp.html

Fingerprint

Dive into the research topics of 'Differentiable Optimization Layer with an Interior Point Approach'. Together they form a unique fingerprint.