Abstract
Combinatorial optimization has wide applications throughout many industries. However, in many real-life applications, some or all coefficients of the optimization problem are not known at the time of execution. In such applications, those coefficients are estimated using machine learning (ML) models. End-to-end predict-and-optimize approaches, which train the ML model while taking the optimization task into consideration, have received increasing attention. In the case of a mixed integer linear program (MILP), previous work suggested relaxing the MILP, adding a quadratic regularizer term, and differentiating the KKT conditions to facilitate gradient-based learning. In this work, we propose to differentiate the homogeneous self-dual formulation of the relaxed LP, which contains more parameters, instead of the KKT conditions. Moreover, as our formulation contains a log-barrier term, we need not add a quadratic term to make the formulation differentiable.
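For illustration, the sketch below shows the general shape of such a differentiable optimization layer in PyTorch: a log-barrier relaxation of an LP is solved so that gradients can flow from the solution back to the predicted cost vector. The function name `barrier_lp_layer`, the unrolled projected-gradient solver, and all hyperparameters are assumptions made for this sketch; the abstract's approach differentiates the homogeneous self-dual formulation with an interior point method rather than unrolling a solver.

```python
import torch

def barrier_lp_layer(c, A, b, t=100.0, iters=200, lr=0.01):
    """Illustrative sketch only (not the paper's implementation).

    Approximately solves the log-barrier relaxation
        min_x  c^T x - (1/t) * sum(log x)   s.t.  A x = b,  x > 0
    with unrolled projected gradient steps, so that autograd can
    propagate gradients from the solution back to the predicted cost c.
    """
    AAt_inv = torch.linalg.inv(A @ A.T)
    # Minimum-norm solution of A x = b as a starting point; the clamp keeps
    # the log-barrier defined but may perturb feasibility slightly
    # (a real interior point layer would maintain strict feasibility).
    x = (A.T @ (AAt_inv @ b)).clamp(min=1e-3)
    for _ in range(iters):
        grad = c - 1.0 / (t * x)                        # gradient of the barrier objective
        step = x - lr * grad
        step = step - A.T @ (AAt_inv @ (A @ step - b))  # project onto {x : A x = b}
        x = step.clamp(min=1e-6)                        # stay in the positive orthant
    return x


# Hypothetical predict-and-optimize training step: `model`, `features`,
# `c_true`, `A`, and `b` are placeholders for the user's own problem data.
# c_pred = model(features)                 # ML model predicts the cost coefficients
# x_star = barrier_lp_layer(c_pred, A, b)  # differentiable LP layer
# loss = (c_true * x_star).sum()           # decision-focused loss
# loss.backward()                          # gradients flow through the layer
```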
Original language | English
---|---
Number of pages | 8
Status | Published - 7 Sep 2020
Event | Doctoral Program of the 26th International Conference on Principles and Practice of Constraint Programming - Online. Duration: 7 Sep 2020 → 11 Sep 2020. https://cp2020.a4cp.org/callfordp.html
Conference
Conference | Doctoral Program of the 26th International Conference on Principles and Practice of Constraint Programming
---|---
Period | 7/09/20 → 11/09/20
Internet address | https://cp2020.a4cp.org/callfordp.html
Activities
- 1 Talk or presentation at a conference
- Differentiable Optimization Layer with an Interior Point Approach
  Jayanta Mandi (Speaker) & Tias Guns (Speaker)
  7 Sep 2020. Activity: Talk or presentation at a conference