20 Mar 2019
11:00

Seminar in probability theory: David Belius (Universität Basel)

Theory of Deep Learning 1: Introduction to the main questions

This is the first talk in a five-part series of talks on deep learning from a theoretical point of view, held jointly by the probability theory and machine learning groups of the Department of Mathematics and Computer Science. The four invited speakers who follow this talk are young researchers contributing in different ways to what will hopefully one day be a comprehensive theory of deep neural networks.

In this first talk I will introduce the main theoretical questions about deep neural networks:
1. Representation - what can deep neural networks represent?
2. Optimization - why and under what circumstances can we successfully train neural networks?
3. Generalization - why do deep neural networks often generalize well, despite their huge capacity?

As a preface, I will review the basic models and algorithms (neural networks, (stochastic) gradient descent, ...) and some important concepts from machine learning (capacity, overfitting/underfitting, generalization, ...).
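For readers unfamiliar with these objects, the following minimal sketch (not part of the abstract) shows a one-hidden-layer neural network trained by stochastic gradient descent on a toy regression task. All choices here (architecture, learning rate, batch size, data) are arbitrary illustrations, not anything from the talk itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data: noisy samples of y = sin(x).
    X = rng.uniform(-3, 3, size=(256, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

    # One-hidden-layer network: x -> relu(x @ W1 + b1) @ W2 + b2.
    hidden = 32
    W1 = rng.normal(size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=1.0 / np.sqrt(hidden), size=(hidden, 1))
    b2 = np.zeros(1)

    lr, batch_size = 0.05, 32
    for step in range(2000):
        # Stochastic gradient descent: gradient estimated on a random mini-batch.
        idx = rng.choice(len(X), size=batch_size, replace=False)
        xb, yb = X[idx], y[idx]

        # Forward pass.
        h = np.maximum(xb @ W1 + b1, 0.0)   # hidden activations (ReLU)
        pred = h @ W2 + b2                  # network output

        # Backpropagation of the mean squared error loss.
        grad_pred = 2.0 * (pred - yb) / batch_size
        gW2 = h.T @ grad_pred
        gb2 = grad_pred.sum(axis=0)
        grad_h = grad_pred @ W2.T
        grad_h[h <= 0.0] = 0.0              # ReLU derivative
        gW1 = xb.T @ grad_h
        gb1 = grad_h.sum(axis=0)

        # Gradient step on all parameters.
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    h = np.maximum(X @ W1 + b1, 0.0)
    print("final training MSE:", float(np.mean((h @ W2 + b2 - y) ** 2)))

Each update estimates the gradient of the loss from a random mini-batch rather than the full dataset, which is what makes the descent "stochastic".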

