Course OML: Optimization and Machine Learning
Course description
This course is both about the important role of optimization in Machine Learning, and about the role of Machine Learning in improving optimization methods. The first five weeks will be taught by Prof.dr. Ilker Birbil (University of Amsterdam). He will give an introduction to supervised learning, with a special focus on the role of optimization.
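To make the optimization viewpoint concrete, the sketch below (illustrative only, not part of the course materials) casts supervised learning as empirical risk minimization: a logistic-regression model is fitted by plain gradient descent on the average logistic loss. All data and parameter choices are made up for the example.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Synthetic binary classification data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # Empirical risk: average logistic loss over the training set.
    def loss_and_grad(w):
        p = sigmoid(X @ w)
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        grad = X.T @ (p - y) / len(y)
        return loss, grad

    # Learning as optimization: plain gradient descent on the empirical risk.
    w = np.zeros(2)
    for _ in range(500):
        loss, grad = loss_and_grad(w)
        w -= 0.5 * grad
    print("final loss:", loss, "weights:", w)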
Prerequisites
Integer and linear optimization, and basic knowledge of nonlinear optimization.

Literature
Handouts.

Examination
Take-home problems.

Abstracts of research lectures

Leen Stougie (CWI & VU)
Learning-Augmented Algorithms for Online Optimization Problems: Illustrated by the Online Traveling Salesman Problem

In online optimization, input arrives over time or one by one, and an algorithm needs to make decisions without knowledge of future requests. The performance of online algorithms is typically assessed by worst-case competitive analysis. The assumption in online optimization of not having any prior knowledge about future requests seems overly pessimistic. In particular, in the realm of machine-learning methods and data-driven applications, one may expect to have access to predictions about future requests. However, simply trusting such predictions might lead to very poor solutions, as they come with no quality guarantee. A recent, vibrant line of research aims at incorporating such error-prone predictions into online algorithms, to go beyond worst-case barriers. The goal of these so-called learning-augmented algorithms is to perform close to an optimal offline algorithm when given accurate predictions (called consistency) and, at the same time, to never perform (much) worse than the best known algorithm without access to predictions (called robustness). Further, the performance of a learning-augmented algorithm should degrade in a controlled way with increasing prediction error. In this lecture I will illustrate results within this framework for the online traveling salesman problem (OLTSP). I will show the typical ingredients that such an analysis requires and the typical statements about performance that one may expect to see. One of the ingredients is a proper definition of a prediction error. For the OLTSP we have devised such an error measure, the basis of which is broadly applicable. This is joint work with Giulia Bernardini (University of Trieste), Alexander Lindmayer (University of Bremen), Alberto Marchetti-Spaccamela (Sapienza University of Rome), Nicole Megow (University of Bremen), and Michelle Sweering (CWI).

Jannis Kurtz (UvA)
Counterfactual Explanations in Machine Learning and Optimization

In recent years, there has been a rising demand for transparent and explainable machine learning (ML) models. A large stream of work focuses on algorithmic methods to derive so-called counterfactual explanations (CEs). In this lecture, we will introduce the main concept and show how these explanations can be efficiently calculated for a large class of ML models by gradient methods or constraint-learning techniques. Afterwards, we will show how robust optimization methods can be used to calculate regions of CEs, which improve flexibility for the user and the robustness of the CEs. Finally, we will show how the concept of CEs can be used to calculate useful explanations for linear optimization problems.
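The gradient approach to counterfactual explanations mentioned in the abstract above can be sketched in a few lines. The trained model, factual instance, and trade-off weight below are all hypothetical; the idea is to minimize the distance to the factual instance plus a penalty that pushes the model's prediction towards the desired class.

    import numpy as np

    # Hypothetical trained logistic model: p(x) = sigmoid(w @ x + b).
    w = np.array([1.5, -2.0])
    b = 0.3
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    x0 = np.array([-1.0, 0.5])   # factual instance, predicted class 0
    lam = 5.0                    # trade-off: proximity vs. validity
    x = x0.copy()

    # Minimize ||x - x0||^2 + lam * (p(x) - 1)^2 by gradient descent,
    # i.e. stay close to x0 while pushing the prediction towards class 1.
    for _ in range(2000):
        p = sigmoid(w @ x + b)
        x -= 0.05 * (2 * (x - x0) + lam * 2 * (p - 1.0) * p * (1 - p) * w)

    print("factual:       ", x0, "prediction:", sigmoid(w @ x0 + b))
    print("counterfactual:", x, "prediction:", sigmoid(w @ x + b))

In practice one would add further terms (plausibility, sparsity), but the proximity/validity trade-off shown here is the core of the formulation.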
Dick den Hertog (UvA)
Mixed-Integer Optimization with Constraint Learning

We establish a broad methodological foundation for mixed-integer optimization with learned constraints. We propose an end-to-end pipeline for data-driven decision making in which constraints and objectives are directly learned from data using machine learning, and the trained models are embedded in an optimization formulation. We exploit the mixed-integer optimization representability of many machine learning methods, including linear models, decision trees, ensembles, and multilayer perceptrons, which allows us to capture various underlying relationships between decisions, contextual variables, and outcomes. We also introduce two approaches for handling the inherent uncertainty of learning from data. First, we characterize a decision trust region using the convex hull of the observations, to ensure credible recommendations and to avoid extrapolation. We efficiently incorporate this representation using column generation and propose a more flexible formulation to deal with low-density regions and high-dimensional data sets. Second, we propose an ensemble learning approach that enforces constraint satisfaction over multiple bootstrapped estimators or multiple algorithms. In combination with domain-driven components, the embedded models and trust region define a mixed-integer optimization problem for prescription generation. We implement this framework as a Python package (OptiCL) for practitioners. We demonstrate the method in both World Food Programme planning and chemotherapy optimization. The case studies illustrate the framework's ability to generate high-quality prescriptions and the value added by the trust region, the use of ensembles to control model robustness, the consideration of multiple machine learning methods, and the inclusion of multiple learned constraints.

Address of the coordinating lecturer
Prof.dr. S.I. Birbil
Faculty of Economics and Business, Section Business Analytics
University of Amsterdam
E-mail: s.i.birbil@uva.nl
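As a closing illustration of the convex-hull trust region described in the last abstract, the sketch below restricts a decision vector to the convex hull of past observations inside a small PuLP model. The data and the linear objective (standing in for a learned model) are hypothetical, and this is a hand-written formulation, not the OptiCL API.

    import numpy as np
    import pulp

    # Hypothetical observed decisions (rows) defining the trust region.
    X_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    w = np.array([2.0, 3.0])  # stand-in for a learned linear objective

    n, d = X_obs.shape
    prob = pulp.LpProblem("trust_region_demo", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{j}") for j in range(d)]
    lam = [pulp.LpVariable(f"lam{i}", lowBound=0) for i in range(n)]

    # Trust region: x must lie in the convex hull of the observations,
    # x = sum_i lam_i * X_obs[i],  sum_i lam_i = 1,  lam >= 0.
    for j in range(d):
        prob += x[j] == pulp.lpSum(lam[i] * float(X_obs[i, j]) for i in range(n))
    prob += pulp.lpSum(lam) == 1

    prob += pulp.lpSum(float(w[j]) * x[j] for j in range(d))  # objective
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("x* =", [v.value() for v in x])

In the lecture's framework, such constraints (and the column-generation variant for large data sets) are generated from the training data rather than written by hand.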