Large-Scale Machine Learning and Stochastic Algorithms
Leon Bottou – Machine Learning Summer School at Purdue, 2011

During the last decade, data sizes have outgrown processor speed. We now frequently face statistical machine learning problems for which datasets are virtually infinite, so computing time, not data, is the bottleneck.
The first part of the lecture centers on the qualitative difference between small-scale and large-scale learning problems. Whereas small-scale learning problems are subject to the usual approximation-estimation tradeoff, large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways. Unlikely optimization algorithms such as stochastic gradient descent show amazing performance on large-scale machine learning problems. The second part gives a detailed overview of stochastic learning algorithms applied to both linear and nonlinear models. In particular, I would like to spend time on the use of stochastic gradient for structured learning problems and on the subtle connection between nonconvex stochastic gradient and active learning.
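To make the setting concrete, here is a minimal sketch of plain stochastic gradient descent for a least-squares linear model. This is an illustrative example, not code from the lectures; the function name, the step-size schedule, and the synthetic data are assumptions. The point it demonstrates is the one above: each update touches a single example, so the per-step cost does not grow with the dataset size.

```python
import numpy as np

def sgd_linear(X, y, lr0=0.1, epochs=1):
    """Plain SGD for least-squares linear regression (illustrative sketch).

    Each step uses a single randomly drawn example, so the cost per update
    is independent of the dataset size -- the property that makes SGD
    attractive when computing time, rather than data, is the bottleneck.
    """
    rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            lr = lr0 / (1 + lr0 * t)          # decreasing ~1/t step size (one common choice)
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i . w - y_i)^2
            w -= lr * grad
    return w

# Tiny usage example on synthetic data (hypothetical, for demonstration only).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=1000)
print(sgd_linear(X, y, epochs=5))  # should be close to w_true
```

The decreasing step size is one standard schedule for SGD; the lectures analyze when and why such schedules converge, but the particular constants used here are arbitrary.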
Course Curriculum
- Lecture 1 – Learning with Stochastic Gradient Descent
- Lecture 2 – The Tradeoffs of Large Scale Learning
- Lecture 3 – Experiments with SGD
- Lecture 4 – Analysis for a Simple Case, Learning with a Single Epoch
- Lecture 5 – General Convergence Results
- Lecture 6 – SGD for Neyman-Pearson Classification