Scientific Calendar Event



Starts 5 Nov 2025 11:00
Ends 5 Nov 2025 12:00
Central European Time
ICTP
Common area, second floor, old SISSA building
Via Beirut, 2
For three decades, statistical physics has provided a framework for analyzing neural networks, but whether it extends to expressive, feature-learning deep models has remained unclear. We show that it does by studying supervised learning in fully connected multi-layer networks whose hidden-layer widths scale with the input dimension (favoring feature learning over ultra-wide kernel behavior while remaining more expressive than narrow or fixed-weight models), in the challenging interpolation regime where the number of parameters and the number of training samples are comparable. Using a matched teacher–student setup, we characterize the fundamental performance limits and the sufficient statistics learned as the amount of data grows. The analysis uncovers a rich phenomenology with multiple learning transitions: with enough data, optimal performance requires "specialization" of the student to the target, yet practical training can become trapped in sub-optimal solutions. Specialization is inhomogeneous: it spreads from shallow to deep layers and unevenly across neurons, and deeper targets are intrinsically harder to learn. Though derived in a Bayesian-optimal setting, the insights on nonlinearity, depth, and finite (proportional) width likely generalize beyond it.
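
As a rough illustration of the matched teacher–student setup in the proportional-width regime described above, the sketch below draws labels from a random fully connected "teacher" network whose hidden widths scale with the input dimension, and instantiates a "student" with the identical architecture. All names, sizes, and the tanh nonlinearity are illustrative assumptions, not details taken from the work being presented.

```python
# Minimal sketch (illustrative only): matched teacher-student data generation
# with hidden widths proportional to the input dimension, and a number of
# samples comparable to the number of parameters (interpolation regime).
import numpy as np

rng = np.random.default_rng(0)

d = 100                   # input dimension (assumed value)
widths = [d, d, d, 1]     # hidden layers scale with d (proportional width)
alpha = 2.0               # samples per parameter, of order one


def init_layers(widths, rng):
    """Random Gaussian weights with 1/sqrt(fan_in) scaling."""
    return [rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))
            for fan_in, fan_out in zip(widths[:-1], widths[1:])]


def forward(x, layers):
    """Fully connected network with tanh hidden layers and a linear readout."""
    h = x
    for W in layers[:-1]:
        h = np.tanh(h @ W)
    return h @ layers[-1]


# Teacher: the target network that generates the labels.
teacher = init_layers(widths, rng)

# Sample size proportional to the parameter count.
n_params = sum(W.size for W in teacher)
n_samples = int(alpha * n_params)

X = rng.normal(size=(n_samples, d))
y = forward(X, teacher)

# Student: same architecture ("matched"), independently initialized weights.
student = init_layers(widths, rng)
y_hat = forward(X, student)

print(f"{n_params} parameters, {n_samples} samples, "
      f"initial student MSE = {np.mean((y - y_hat) ** 2):.4f}")
```

In such a setup, the question studied in the talk is how well the student can recover the teacher as the sample-to-parameter ratio grows, and which summary statistics of the weights govern the Bayes-optimal error.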