10 Apr 2019

Seminar in probability theory: Lenaïc Chizat (Université Paris-Sud)

Theory of Deep Learning 4: Training Neural Networks in the Lazy and Mean Field Regimes

The current successes achieved by neural networks are mostly driven by experimental exploration of various architectures, pipelines, and hyper-parameters, guided by intuition rather than precise theory. Focusing on the optimization/training aspect, we will see in this talk why pushing theory forward is challenging, but also why it matters and what key insights it may lead to. We will review some recent results on the phenomenon of "lazy training", on the role of over-parameterization, and on training neural networks with a single hidden layer.
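The "lazy training" phenomenon mentioned in the abstract can be illustrated numerically: when the network's output is multiplied by a large scale, gradient descent fits the data while the parameters barely move from their initialization, so the network behaves like its linearization around the starting point. The following is a minimal NumPy sketch of this effect, not material from the talk; the width, scaling, step size, and data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network, centered so the prediction is zero at
# initialization (as in the scaled models studied in the lazy-training
# literature). All sizes and constants are illustrative choices.
m, n = 200, 20               # hidden width, number of samples
alpha = 100.0                # large output scaling pushes training into the lazy regime
W = rng.normal(size=(m, 1))
a = rng.normal(size=m) / np.sqrt(m)
W0, a0 = W.copy(), a.copy()

X = rng.normal(size=(n, 1))
y = np.sin(2.0 * X[:, 0])

def forward(W, a):
    # Centered prediction: identically zero at (W0, a0).
    return alpha * (np.tanh(X @ W.T) @ a - np.tanh(X @ W0.T) @ a0)

init_loss = 0.5 * np.mean((forward(W, a) - y) ** 2)
lr = 1e-4 / alpha**2         # step size must shrink like 1/alpha^2 for stability
for _ in range(500):
    h = np.tanh(X @ W.T)                      # (n, m) hidden activations
    r = forward(W, a) - y                     # residuals on the training set
    grad_a = alpha * h.T @ r / n
    grad_W = alpha * ((1.0 - h**2) * (r[:, None] * a)).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

final_loss = 0.5 * np.mean((forward(W, a) - y) ** 2)
rel_move = np.linalg.norm(W - W0) / np.linalg.norm(W0)
# The loss decreases while the weights stay very close to initialization:
# the signature of the lazy regime, where the model is well approximated
# by its first-order Taylor expansion in the parameters.
```

In this regime the relative displacement of the weights is tiny even after the loss has dropped, which is why lazy training reduces to a linear (kernel) method and, as the talk discusses, can fail to exploit the nonlinearity of the network.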
