Summer School Lecturers
Abstracts and Short Biographies
- Title: Level-set methods for convex optimization
- Abstract:
Certain classes of convex optimization problems have computationally favorable objectives but complicating constraints that render established algorithms ineffective for large-scale problems; such problems often arise in machine learning and signal processing applications. Level-set methods are a family of techniques for transforming a constrained problem into a sequence of simpler subproblems that are amenable to standard algorithms. The approach offers many theoretical and computational benefits. This tutorial lays out the basic theoretical background needed to understand the approach, including duality and its connection to properties of the optimal-value function, and presents the main algorithmic tools needed to solve the subproblems. Various applications and problem formulations are used throughout to highlight key aspects of the level-set approach. [Based on joint work with A. Aravkin, J. Burke, D. Drusvyatskiy, and S. Roy.]
- Short Biography:
Michael Friedlander is IBM Professor of Computer Science and Professor of Mathematics at the University of British Columbia. He received his PhD in Operations Research from Stanford University in 2002 and his BA in Physics from Cornell University in 1993. From 2002 to 2004 he was the Wilkinson Fellow in Scientific Computing at Argonne National Laboratory. He has held visiting positions at UCLA's Institute for Pure and Applied Mathematics (2010) and at Berkeley's Simons Institute for the Theory of Computing (2013). His research centers on developing numerical methods for large-scale optimization, implementing them in software, and applying them to problems in signal processing and machine learning.
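The core level-set mechanism described in this abstract, trading a constrained problem for root-finding on an optimal-value function, can be sketched on a scalar toy problem. Everything below (the function names, the bisection solver, and the toy instance) is a hedged illustration of the general idea only, not material from the tutorial:

```python
# Toy level-set sketch (illustrative only): solve
#     minimize |x|   subject to   |a*x - b| <= sigma
# by root-finding on the optimal-value function of the flipped problem,
#     v(tau) = min { |a*x - b| : |x| <= tau },
# which for this scalar instance has the closed form max(0, |b| - |a|*tau).

def value_function(tau, a, b):
    """Exact optimal value of the tau-constrained subproblem (scalar case)."""
    return max(0.0, abs(b) - abs(a) * tau)

def level_set_solve(a, b, sigma, tol=1e-10):
    """Bisection on v(tau) - sigma to recover the constrained solution."""
    lo, hi = 0.0, abs(b) / abs(a)      # v(lo) >= sigma >= v(hi) = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if value_function(mid, a, b) > sigma:
            lo = mid                   # subproblem value still too large
        else:
            hi = mid                   # feasible: shrink the level
    return hi                          # smallest tau with v(tau) <= sigma

tau_star = level_set_solve(a=2.0, b=10.0, sigma=1.0)
print(tau_star)  # analytic answer: (|b| - sigma)/|a| = 4.5
```

Here v(tau) has a closed form; in realistic applications each evaluation of the value function is itself an optimization subproblem solved inexactly, which is where the duality theory surveyed in the tutorial enters.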
- Title: Large scale convex composite optimization: duality, algorithms and implementations
- Co-author: Defeng Sun
- Abstract:
Convex composite optimization problems arise frequently in a wide variety of domains, including machine learning, statistics, signal processing, and semidefinite programming. Typically these problems are large scale, and the algorithms designed to solve them must fully exploit their underlying structures. In this tutorial, we survey basic tools for designing efficient and robust algorithms for large-scale convex composite optimization problems. In particular, we describe augmented Lagrangian based algorithms for these problems and discuss how the subproblems can be solved efficiently by a semismooth Newton-CG method. While it is widely believed that first-order methods are the most appropriate framework for solving such large-scale problems, we shall demonstrate that for difficult problems, wisely incorporating second-order information into the algorithmic design is necessary for achieving reasonably good accuracy and computational efficiency.
- Short Biography:
Dr Toh is a Professor in the Department of Mathematics, National University of Singapore (NUS). He obtained his BSc degree in Mathematics from NUS in 1990 and his PhD in Applied Mathematics from Cornell University in 1996 under the direction of Nick Trefethen.
He is currently an Area Editor for Mathematical Programming Computation and an Associate Editor for the SIAM Journal on Optimization. He also serves as the secretary of the SIAM Activity Group on Optimization. He has been invited to speak at numerous conferences and workshops, including the SIAM Annual Meeting in 2010 and ISMP in 2006. His current research focuses on designing efficient algorithms and software for convex programming, particularly for large-scale optimization problems arising from data science and for large-scale matrix optimization problems such as linear semidefinite programming (SDP) and convex quadratic semidefinite programming (QSDP).
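The augmented Lagrangian template mentioned in the abstract can be sketched on a toy equality-constrained problem where the inner subproblem happens to have a closed-form solution. This is a minimal illustration only (all names are this sketch's own); in the tutorial's setting the inner subproblems are instead solved by a semismooth Newton-CG method:

```python
# Toy augmented Lagrangian iteration (illustrative only):
#     minimize  0.5*x**2   subject to   x = 1.
# Augmented Lagrangian:
#     L(x, lam; rho) = 0.5*x**2 + lam*(x - 1) + 0.5*rho*(x - 1)**2

def augmented_lagrangian(rho=10.0, iters=50):
    lam = 0.0
    for _ in range(iters):
        # Inner subproblem min_x L(x, lam; rho): stationarity gives
        #     x + lam + rho*(x - 1) = 0,
        # so the minimizer is available in closed form here.
        x = (rho - lam) / (1.0 + rho)
        # Multiplier update (dual ascent step on the constraint residual).
        lam += rho * (x - 1.0)
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)  # converges to the KKT pair x* = 1, lam* = -1
```

The multiplier error contracts by the factor 1/(1 + rho) per outer iteration in this toy, which is why a modest number of outer iterations suffices; for the structured problems in the lecture, the expensive part is the inner solve, hence the emphasis on second-order methods.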
- Title: Algorithmic and geometric aspects of combinatorial and continuous optimization
- Abstract:
The question of whether a linear optimization problem can be solved in strongly polynomial time - that is, whether there exists an algorithm whose running time is independent of the input data length and polynomial in the number of constraints and the dimension - is listed by Fields medalist Smale as one of the top mathematical problems for the 21st century. The simplex and primal-dual interior point methods are the most computationally successful algorithms for linear optimization. While simplex methods follow an edge path, interior point methods follow the central path. The curvature of a polytope, defined as the largest possible total curvature of the associated central path, can be regarded as a continuous analogue of its diameter. We review results and conjectures dealing with the combinatorial, geometric, and algorithmic aspects of linear optimization. In particular, we highlight links between the edge and central paths, and between the diameter and the curvature. We recall continuous results of Dedieu, Malajovich, and Shub, and discrete results of Holt, Klee, and Walkup, and related conjectures including the Hirsch conjecture, which was disproved by Santos. We present results dealing with the curvature and diameter of polytopes, such as a counterexample to a continuous analogue of the polynomial Hirsch conjecture by Allamigeon, Benchimol, Gaubert, and Joswig, a strengthening of the Kleinschmidt and Onn upper bound for the diameter of lattice polytopes by Del Pia and Michini, and the strengthening of the Kalai and Kleitman upper bound for the diameter of polytopes by Kitahara, Sukegawa, and Todd.
- Short Biography:
Since 2004, Antoine Deza has been at McMaster University, where he has held a Canada Research Chair in Combinatorial Optimization. Since 2008, he has been the Head of the Advanced Optimization Laboratory. He had previously held a faculty position at the Tokyo Institute of Technology. He has been the Chair of the Fields Institute Industrial Optimization Seminar since 2008, a co-organizer of several events, including the 2011 Fields Institute Thematic Program on Discrete Geometry and Applications, and an Associate Editor for Discrete Applied Mathematics, Optimization Letters, and the Journal of Discrete Algorithms. He was elected a Fields Institute Fellow in 2014. He has held visiting positions at Ecole Polytechnique Federale de Lausanne, Technion Haifa, the Tokyo Institute of Technology, Universite Paris Sud, where he holds a Digiteo Chair in Combinatorics and Optimization, Universite Pierre et Marie Curie, and Ecole Nationale des Ponts et Chaussees, Paris.
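For readers new to the terminology, the central path referenced in the abstract admits the standard logarithmic-barrier description (standard textbook notation, not notation taken from the lecture itself):

```latex
% Central path of the linear program  \min\, c^{\top}x  subject to  Ax \ge b,
% written as the minimizer of the logarithmic-barrier problem:
\[
  x(\mu) \;=\; \operatorname*{arg\,min}_{Ax > b}\;
  c^{\top}x \;-\; \mu \sum_{i=1}^{m} \log\!\bigl(a_i^{\top}x - b_i\bigr),
  \qquad \mu > 0 .
\]
% As \mu \downarrow 0, x(\mu) converges to an optimal face. The total
% curvature \int \lVert \ddot{x}(t) \rVert \, dt of this curve (arc-length
% parametrization), maximized over objectives, gives the curvature of the
% polytope \{x : Ax \ge b\} discussed above.
```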
- Title: Convex analysis approach to discrete optimization
- Abstract:
Discrete convex analysis is a theory that aims at a discrete analogue of convex analysis for nonlinear discrete optimization. We first introduce fundamental classes of discrete convex functions, including submodular functions, integrally convex functions, M-convex functions, and L-convex functions. Emphasis is put on the relation between submodularity and convexity/concavity.
Next we explain fundamental properties of discrete convex functions, including:
(i) Operations such as addition, scaling, convolution, and transformation by networks,
(ii) The conjugacy relation between L-convexity and M-convexity under the Legendre transformation, and
(iii) Duality theorems such as the separation theorem and a Fenchel-type minimax theorem.
Then we go on to algorithmic aspects of minimizing discrete convex functions, including:
(i) The local optimality criterion for global minimality, which varies with the type of discrete convexity,
(ii) Descent algorithms for M-convex and L-convex functions, and
(iii) The M-convex intersection algorithm for minimizing the sum of two M-convex functions.
- Short Biography:
Kazuo Murota is a professor in the School of Business Administration, Faculty of Urban Liberal Arts, Tokyo Metropolitan University, where he has been since 2015. He received his Doctor of Engineering from the University of Tokyo in 1983 and his Doctor of Science from Kyoto University in 2002. His research interests range broadly over mathematical engineering, in particular discrete and continuous optimization (discrete convex analysis), combinatorial matrix theory (mixed matrices), numerical analysis, and economic geography. He is the author of five English books, including "Discrete Convex Analysis" and "Systems Analysis by Graphs and Matroids." He was awarded the Inoue Prize for Science in 2004.
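The local-to-global principle in point (i) of the abstract can be sketched in the simplest one-dimensional setting (a toy illustration, not code from the lecture): for a discrete convex function on the integers, checking the two nearest neighbors already certifies global optimality, so plain descent suffices.

```python
# Toy descent on a one-dimensional discrete convex function f: Z -> R.
# For such f, the local criterion  f(x) <= f(x-1) and f(x) <= f(x+1)
# implies global minimality, so greedy neighbor descent terminates at
# a global minimizer. (The multidimensional M-/L-convex descent
# algorithms in the lecture use richer neighborhoods.)

def descent(f, x0):
    """Greedy descent over integer neighbors; returns a local (= global) min."""
    x = x0
    while True:
        best = min([x - 1, x + 1], key=f)
        if f(best) >= f(x):   # local optimality certificate reached
            return x
        x = best

f = lambda x: (x - 7) ** 2 + abs(x + 2)   # discrete convex on Z
print(descent(f, -50))  # a global minimizer (here 6; x = 7 ties with value 9)
```

Note that this certificate fails for non-convex discrete functions, and in higher dimensions coordinate-wise neighbors are no longer enough; which neighborhood restores the local-to-global property is exactly what distinguishes the M-convex and L-convex classes.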