Starts 9 Feb 2021 15:00
Ends 9 Feb 2021 17:00
Central European Time
Zoom meeting

Meeting ID: 475-819-702

If you haven't registered for previous QLS webinars, please contact the organizers to obtain the password for this Zoom meeting.

Abstract: Understanding the impact of data structure on learning remains a key challenge for the theory of neural networks.
In these two lectures, we will discuss how to go beyond the simple i.i.d. modelling paradigm in the teacher-student setup by studying neural networks trained on data drawn from structured generative models. 
Our discussion will center around two results:
(1) We give rigorous conditions under which a class of generative models shares key statistical properties with an appropriately chosen Gaussian feature model.
(2) We use this Gaussian equivalence to analyse the dynamics of two-layer neural networks trained using one-pass stochastic gradient descent on data drawn from a large class of generators.  
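To make the setup in (2) concrete, here is a minimal sketch of one-pass (online) SGD for a two-layer network trained on data from a generative model, in the spirit of the hidden manifold model: inputs are produced by pushing a low-dimensional latent variable through a fixed random non-linear map, and labels depend on the latent variable. All dimensions, scalings, and function names below are illustrative choices, not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent dim, input dim, student hidden units
D, N, K = 10, 100, 3
steps, lr = 5000, 0.1

# Generator: a fixed random map from latent c to input x, standing in
# for the structured generative models discussed in the lectures
F = rng.standard_normal((N, D)) / np.sqrt(D)

def generate():
    c = rng.standard_normal(D)
    x = np.tanh(F @ c)          # non-linear feature map
    return c, x

# Teacher acts on the latent variable c (hidden-manifold-style labels)
w_teacher = rng.standard_normal(D) / np.sqrt(D)

def label(c):
    return np.tanh(w_teacher @ c)

# Two-layer student: y(x) = sum_k v_k * tanh(w_k . x / sqrt(N))
W = rng.standard_normal((K, N))
v = rng.standard_normal(K) / np.sqrt(K)

def student(x):
    return v @ np.tanh(W @ x / np.sqrt(N))

# One-pass SGD: every sample is drawn fresh and used exactly once
losses = []
for t in range(steps):
    c, x = generate()
    y = label(c)
    h = W @ x / np.sqrt(N)
    a = np.tanh(h)
    err = v @ a - y
    losses.append(0.5 * err ** 2)
    # Gradients of the squared loss, computed before either update
    grad_v = err * a
    grad_W = err * np.outer(v * (1.0 - a ** 2), x) / np.sqrt(N)
    v -= lr * grad_v
    W -= lr * grad_W

print(np.mean(losses[:500]), np.mean(losses[-500:]))
```

Because each sample is used only once, the training loss tracks the generalization error, which is what makes the dynamics amenable to the analysis described in the lectures.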
I will try to make these lectures self-contained. 
They will be mostly based on the following two papers:
[1] Goldt, S., Mézard, M., Krzakala, F. and Zdeborová, L., 2020.
     Modeling the Influence of Data Structure on Learning in Neural Networks: The Hidden Manifold Model. Physical Review X, 10(4), p.041044.
[2] Goldt, S., Reeves, G., Loureiro, B., Mézard, M., Krzakala, F. and Zdeborová, L., 2020.
     The Gaussian equivalence of generative models for learning with two-layer neural networks.
     Under review.