Convex Optimization (Optymalizacja wypukła) 1000-3M22OW
Many problems in Machine Learning can be formulated as optimizing a function over R^n subject to constraints expressed as inequalities. While this model is general enough to include provably hard problems, it becomes provably tractable under certain assumptions. Perhaps the most important examples are problems that can be modeled as optimizing a convex function over a set of constraints describing a convex set. The course will cover the most important algorithms for such convex optimization problems, including gradient descent, the subgradient method, the barrier method, and Newton's method. It will also cover algorithms for, and properties of, an especially well-understood special case of convex optimization, namely linear programming. As an important side topic, we will also discuss discrete problems that arise from convex problems by adding integrality constraints (e.g. integer linear programming).
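To give a flavor of the first algorithm on the list, here is a minimal sketch of fixed-step gradient descent on a smooth convex function. The quadratic objective, step size, and iteration count below are illustrative choices, not taken from the course materials.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, iters=1000):
    """Fixed-step gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Example: f(x) = 1/2 x^T A x - b^T x with A positive definite,
# so f is convex and its unique minimizer solves A x = b.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([2.0, 4.0])

x_star = gradient_descent(lambda x: A @ x - b, x0=[0.0, 0.0])
# For this A and b the minimizer is x* = (1, 1).
```

With a positive definite Hessian, a fixed step below 2/L (where L is the largest eigenvalue of A, here 4) guarantees convergence; the course analyzes such step-size conditions in detail.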
After the course, students should be able to recognize which problems can be formulated as convex problems and understand how the efficiency of the available algorithms depends on the class of constraints used. We will see a hierarchy of these classes (convex programming, semidefinite programming, second-order cone programming, quadratic programming, linear programming). We will practice modeling problems using constraints from these classes, as well as applying available solvers (open-source, or commercial with a free academic license).
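As a small taste of the modeling practice described above, the sketch below solves a toy linear program with an open-source solver (SciPy's `linprog`). The particular objective and constraints are invented for illustration: maximize x + y subject to x + 2y ≤ 4, 3x + y ≤ 6, x, y ≥ 0. Since `linprog` minimizes, the objective is negated.

```python
from scipy.optimize import linprog

# maximize x + y  <=>  minimize -x - y
c = [-1.0, -1.0]
A_ub = [[1.0, 2.0],   # x + 2y <= 4
        [3.0, 1.0]]   # 3x + y <= 6
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# The optimum lies at the vertex where both constraints are tight:
# x = 1.6, y = 1.2, with objective value x + y = 2.8.
```

Commercial solvers such as Gurobi or CPLEX (both offering free academic licenses) expose similar modeling interfaces and additionally handle the integer-constrained variants mentioned in the course description.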
MISMaP primary field of study
Type of course
Mode of delivery
Prerequisites (description)
Assessment criteria
There will be homework assignments and a written exam. The final grade will depend on performance on both.
More information
Additional information (e.g., on the registration calendar, course instructors, class locations and times) may be available in USOSweb: