Speaker: Ryan Tibshirani
Abstract: I will talk about some new results exposing surprising phenomena in min-norm least squares interpolation in high dimensions. The setting is very simple: given n observations from a linear model in p dimensions, run a linear regression, and take the minimum L2 norm solution when p > n. Now examine the asymptotic prediction risk, as n and p grow with p/n converging to a positive constant \gamma. How does this risk behave as a function of \gamma (and other aspects of the problem)? Random matrix theory provides a precise answer; despite the simplicity of the problem, there are several surprises, and several similarities to what is observed for more complex interpolators such as kernels and neural networks.
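For readers who want to see the setup concretely, below is a minimal simulation sketch (not from the talk itself): it computes the minimum-L2-norm least squares solution via the pseudoinverse and Monte Carlo estimates the prediction risk at several aspect ratios \gamma = p/n. The isotropic Gaussian features, identity feature covariance, signal scaling, and the function name min_norm_risk are all illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def min_norm_risk(n, gamma, snr=1.0, sigma=1.0, reps=50, rng=None):
    """Monte Carlo estimate of the excess prediction risk of the
    minimum-L2-norm least squares solution at aspect ratio gamma = p/n,
    under isotropic Gaussian features (an illustrative assumption)."""
    rng = np.random.default_rng(rng)
    p = int(round(gamma * n))
    risks = []
    for _ in range(reps):
        # Fixed signal with squared norm snr (scaling choice is an assumption)
        beta = rng.standard_normal(p)
        beta *= np.sqrt(snr) / np.linalg.norm(beta)
        X = rng.standard_normal((n, p))
        y = X @ beta + sigma * rng.standard_normal(n)
        # Minimum-L2-norm least squares solution: beta_hat = X^+ y
        beta_hat = np.linalg.pinv(X) @ y
        # With identity feature covariance, the excess prediction risk
        # (excluding the irreducible noise) is ||beta_hat - beta||^2
        risks.append(np.sum((beta_hat - beta) ** 2))
    return float(np.mean(risks))

if __name__ == "__main__":
    n = 200
    for gamma in [0.2, 0.5, 0.9, 1.1, 2.0, 5.0]:
        print(f"gamma = {gamma:>4}: estimated risk ~ {min_norm_risk(n, gamma):.3f}")
```

Under these assumptions, the estimated risk blows up as \gamma approaches 1 from either side and comes back down for larger \gamma, which is one way to see the kind of surprising behavior the abstract alludes to.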