Short Course: Machine Learning from a Statistical Point of View
Speaker: Stefan Richter (Heidelberg)
Time and Place: The short course will take place in the Mathematikon (INF 205).
Monday, October 15th, 9.15–10.45, 11.15–12.45 Conference Room (5th floor)
14.00–15.30 Seminar Room A
Tuesday, October 16th, 10.00–11.30, 12.00–13.30, 14.30–16.00 Conference Room (5th floor)
Wednesday, October 17th, 9.15–10.45, 11.15–12.45 Conference Room (5th floor)
During the breaks, coffee/tea and snacks will be provided in room 5/300.
Abstract:
The course gives an overview of past and recent machine learning algorithms, their applications in practice, and their statistical analysis.
On the first two days, we will consider supervised algorithms such as support vector machines, trees, random forests and neural networks. If time permits, we will also have a look at reinforcement learning (via Deep Q-Learning) and unsupervised learning via spectral clustering. For each algorithm, we provide graphical examples for better understanding.
The statistical focus lies on upper bounds for the so-called Bayes excess risk, which measures the expected difference between the generalization error of the estimator and the (optimal) Bayes risk. Since the input data is usually high-dimensional with dimension d, we will formulate non-asymptotic results which depend on d and n, the number of training samples. Due to time restrictions, we will only explain the theoretical setup and the theorems but skip proofs.
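In standard notation (the symbols below are a common convention, not taken from the announcement), the Bayes excess risk studied in the course can be written as follows:

```latex
% Risk of a predictor f under a loss \ell, with (X, Y) drawn from the data distribution P:
R(f) = \mathbb{E}_{(X,Y) \sim P}\big[\, \ell(f(X), Y) \,\big]

% Bayes risk: the best achievable risk over all (measurable) predictors:
R^{*} = \inf_{f} R(f)

% Bayes excess risk of an estimator \hat{f}_n fitted on n training samples;
% the non-asymptotic bounds mentioned above control this quantity
% as a function of n and the input dimension d:
\mathcal{E}(\hat{f}_n) = \mathbb{E}\big[ R(\hat{f}_n) \big] - R^{*}
```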