_Mandatory course for the Data Science program._
This course is a detailed survey of optimization from both a computational and a theoretical perspective. Theoretical topics include convex sets, convex functions, optimization problems, least squares, linear and quadratic programs, optimality conditions, and duality theory. Special emphasis is placed on scalable numerical methods for analyzing and solving general smooth unconstrained problems (e.g., first-order and second-order methods), quadratic programs (e.g., linear least squares), general smooth constrained problems (e.g., interior-point methods), as well as a family of non-smooth problems (e.g., the ADMM method).
Applications in data science, such as machine learning, model fitting, and image processing, will be discussed. The computational part covers the following algorithms: the gradient method, quasi-Newton methods, the proximal gradient method, Nesterov's accelerated gradient method, and the stochastic gradient descent method. Students complete hands-on exercises using high-level numerical software.
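As a taste of the hands-on exercises, the following is a minimal sketch of the gradient method applied to a least-squares problem in NumPy; the specific problem instance, step-size rule, and stopping tolerance are illustrative assumptions, not course material.

```python
import numpy as np

def gradient_descent(grad, x0, step, tol=1e-8, max_iter=5000):
    """Plain gradient method: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop once the gradient is nearly zero
            break
        x = x - step * g
    return x

# Illustrative problem (an assumption for this sketch): minimize
# f(x) = 0.5 * ||A x - b||^2, whose gradient is A^T (A x - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
x_star = gradient_descent(lambda x: A.T @ (A @ x - b), np.zeros(5), step=1.0 / L)
print(np.linalg.norm(A.T @ (A @ x_star - b)))  # gradient norm at the solution
```

The step size 1/L, with L the Lipschitz constant of the gradient, is the classical choice that guarantees convergence for smooth convex problems like this one.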
## Prerequisites
- Not scared of math
- Good knowledge of linear algebra
- Multivariable calculus skills
- Programming skills; Python is recommended
## Topics overview
- (Numerical) linear algebra review
- Iterative methods for solving linear systems of equations (see the sketch after this list)
- First- and second-order methods, including quasi-Newton methods
- Convex functions
- Unconstrained optimization
- Stochastic methods
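As a concrete example of the iterative-methods topic above, here is a minimal sketch of the conjugate gradient method for a symmetric positive-definite linear system; the test matrix and tolerances are illustrative assumptions, not the course's reference implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for A x = b, with A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual b - A x
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact minimizer along direction p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

# Illustrative SPD system (an assumption): A = M^T M + I is symmetric
# positive definite by construction.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M.T @ M + np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm; should be near zero
```

Each iteration needs only one matrix-vector product, which is what makes methods of this kind scale to the large problems emphasized in the course.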