A statistician walks into a deep learning bar…

Apr 12 (Wednesday) at 2pm, GHC 8102

Speaker: YJ Choe

Abstract: In this talk, I will review some of the well-known and less well-known statistical results on neural networks. Specifically, I will present minimax rates and generalization error bounds for shallow neural networks, which appear to break the curse of dimensionality that other nonparametric estimators are known to suffer from. In the first part of the talk, I will give a brief review of neural networks and present their universal approximation property as well as their minimax rates of convergence [1]. In the second part of the talk, I will introduce convex neural networks and their connections to reproducing kernel Hilbert spaces (RKHSs), showing how shallow neural networks with the popular rectified linear units (ReLUs) can achieve a sparsity-adaptive generalization error bound [2]. Time permitting, I will also mention some recent work that explains the effect of depth in deep learning.
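
For readers unfamiliar with the setup, the following is a minimal sketch, not taken from the talk materials, of the model class and the rate comparison behind the "curse of dimensionality" claim; the width m, smoothness index s, dimension d, and sample size n are generic symbols introduced here only for illustration.

  % Illustrative sketch (assumed notation, not from the talk):
  % a one-hidden-layer (shallow) ReLU network of width m.
  \[
    f_m(x) = \sum_{j=1}^{m} a_j \, \max\bigl(w_j^{\top} x + b_j,\, 0\bigr),
    \qquad x \in \mathbb{R}^d .
  \]
  % Classical nonparametric estimators of an s-smooth function of d
  % variables converge (in squared error, from n samples) at the minimax rate
  \[
    n^{-2s/(2s+d)},
  \]
  % which degrades rapidly as d grows. The results surveyed in [1, 2] concern
  % function classes over which shallow networks attain rates on the order of
  % n^{-1/2}, up to logarithmic and mild dimension-dependent factors.
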

  1. Yang, Y. and Barron, A. R. (1999). Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27(5), 1564-1599.
  2. Bach, F. (2014). Breaking the curse of dimensionality with convex neural networks. Research Report, INRIA Paris.