Meeting ID: 475-819-702

If you haven't registered for previous QLS webinars, please contact the organizers to obtain the password for this Zoom meeting.

We provide a complete picture of the mechanisms of unsupervised learning: in a simple neural network, unsupervised learning can be interpreted as breaking a series of symmetries, driven by an increasing number of observations. First, spontaneous symmetry breaking initiates concept formation; then the permutation symmetry among hidden units is spontaneously broken, in two successive stages: first on the student side, then on the teacher (planted truth) side. We also analytically prove that, in the correlation-free case, the learning threshold that triggers spontaneous symmetry breaking (concept formation) in a simple neural network does not depend on the number of hidden neurons (here two, for a minimal model), provided this number is finite. The underlying physics is the factorization of the partition function, for which we give a proof. Moreover, our analytical results reveal that weak correlation among the receptive fields of hidden neurons significantly reduces the learning threshold, consistent with the non-redundant-weight assumption popular in systems neuroscience and machine learning. By studying this minimal model, we reveal the inner workings of unsupervised learning, a fundamental process underlying both artificial and biological intelligence. We expect this work to open doors toward physical laws governing neural learning.
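The factorization argument mentioned in the abstract can be illustrated numerically. The sketch below is not the paper's Hamiltonian; it only assumes, as a placeholder, that in the correlation-free case the energy of a two-hidden-unit system is additive across units, in which case the joint partition function equals the product of the single-unit partition functions.

```python
import numpy as np

beta = 1.0                      # inverse temperature
w = np.linspace(-3, 3, 201)     # discretized weight values for each unit

# Placeholder single-unit energies (illustrative, not the paper's model):
E1 = 0.5 * w**2                 # energy landscape of hidden unit 1
E2 = 0.5 * (w - 1.0)**2         # energy landscape of hidden unit 2

# Joint partition function: sum over all (w1, w2) pairs of the
# additive energy E(w1, w2) = E1(w1) + E2(w2).
Z_joint = np.sum(np.exp(-beta * (E1[:, None] + E2[None, :])))

# Factorized form: product of the two single-unit partition functions.
Z_factored = np.exp(-beta * E1).sum() * np.exp(-beta * E2).sum()

print(np.isclose(Z_joint, Z_factored))  # True when the energy is additive
```

When the units' receptive fields are correlated, a coupling term in the energy breaks this additivity and the partition function no longer factorizes, which is where the threshold reduction discussed in the abstract comes into play.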