_Mandatory course for Data Science._
This course is a detailed survey of optimization from both a computational and a theoretical perspective. Theoretical topics include convex sets, convex functions, optimization problems, least squares, linear and quadratic programs, optimality conditions, and duality theory. Special emphasis is placed on scalable numerical methods for analyzing and solving linear programs (e.g., the simplex method), general smooth unconstrained problems (e.g., first-order and second-order methods), quadratic programs (e.g., linear least squares), general smooth constrained problems (e.g., interior-point methods), as well as a family of non-smooth problems (e.g., ADMM).
Applications in data science, such as machine learning, model fitting, and image processing, will be discussed. The computational part covers the following algorithms: the gradient method, quasi-Newton methods, the proximal gradient method, Nesterov’s accelerated gradient method, the augmented Lagrangian method, the alternating direction method of multipliers (ADMM), block coordinate descent, and stochastic gradient descent. Students complete hands-on exercises using high-level numerical software.
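As a taste of those exercises, here is a minimal sketch, assuming Python with NumPy (one of the recommended languages), of the gradient method applied to a linear least-squares problem; the random problem data, the step size 1/L, the iteration cap, and the stopping tolerance are illustrative choices, not course material:

```python
# A gradient-method sketch for linear least squares: min_x 0.5 * ||Ax - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))    # illustrative random problem data
b = rng.standard_normal(100)

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient: ||A||_2^2
x = np.zeros(5)
for _ in range(1000):
    grad = A.T @ (A @ x - b)         # gradient of 0.5 * ||Ax - b||^2
    if np.linalg.norm(grad) < 1e-8:  # stop once the gradient is (near) zero
        break
    x -= grad / L                    # classical 1/L step size

# Sanity check against a direct solver
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```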
## Prerequisites
- not scared of math
- good knowledge of linear algebra
- multivariable calculus skills
- programming skills; Python or Julia is recommended
## Topics overview
- (Numerical) Linear Algebra Review
- Convex Functions
- Duality Theory
- Unconstrained Optimization
- Linear Programming Models
- Proximal Methods and ADMM (see the sketch after this list)
- Interior-Point Methods
- Iterative Methods
- Stochastic Methods
- Derivative-Free Methods
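As a preview of the proximal-methods topic, here is a minimal sketch, again assuming Python with NumPy, of the proximal gradient method (ISTA) applied to a lasso problem; the problem sizes, regularization weight, and iteration count are illustrative assumptions:

```python
# A proximal-gradient (ISTA) sketch for the lasso:
#   min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))          # illustrative random design
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)         # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(200)

lam = 0.1                                   # illustrative regularization weight
t = 1.0 / np.linalg.norm(A, 2) ** 2         # step size 1/L
x = np.zeros(50)
for _ in range(500):
    # gradient step on the smooth part, then the prox of the l1 part
    x = soft_threshold(x - t * (A.T @ (A @ x - b)), t * lam)

print(np.count_nonzero(np.abs(x) > 1e-6))   # the iterate is (near) sparse
```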