Classification accuracy as a proxy for two-sample testing

Jan 28, 3pm, GHC 8102

Speaker: Ilmun Kim

Abstract: When data analysts train a classifier and check whether its accuracy is significantly different from chance, they are implicitly performing a two-sample test. We investigate the statistical properties of this flexible approach in the high-dimensional setting. We first present general conditions under which a classifier-based test is consistent, meaning that its power converges to one. To get a finer understanding of the rates of consistency, we study the specialized setting of distinguishing two Gaussians with different means and a common covariance. Focusing on Fisher's linear discriminant analysis (LDA) and its high-dimensional variants, we provide asymptotic but explicit power expressions for classifier-based tests and contrast them with the corresponding Hotelling-type tests. Surprisingly, the two power expressions match exactly in terms of the parameters of interest, and the LDA approach is worse only by a constant factor. This is joint work with Aaditya Ramdas, Aarti Singh, and Larry Wasserman.
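
For intuition, here is a minimal sketch of the classifier-based two-sample test the abstract describes, on synthetic data in the Gaussian mean-shift setting: pool labeled samples from the two distributions, fit LDA on a training split, and compare held-out accuracy to chance with a binomial test. The data, parameter values, and the choice of scikit-learn/SciPy are illustrative assumptions, not the speaker's exact procedure.

```python
# Sketch of a classifier two-sample test (illustrative, not the paper's exact method).
import numpy as np
from scipy.stats import binomtest
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, n = 50, 500  # dimension and per-sample size (arbitrary choices)

# Two Gaussians with a common (identity) covariance and different means.
X0 = rng.normal(loc=0.0, size=(n, d))
X1 = rng.normal(loc=0.2, size=(n, d))

# Pool the samples and label them by origin: testing P = Q becomes classification.
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Hold out half the data; held-out accuracy is the test statistic.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
correct = int((clf.predict(X_te) == y_te).sum())

# Under H0: P = Q (with balanced classes), each held-out prediction is correct
# with probability 1/2, so the count of correct predictions is roughly
# Binomial(m, 1/2); reject when accuracy is significantly above chance.
result = binomtest(correct, n=len(y_te), p=0.5, alternative="greater")
print(f"held-out accuracy = {correct / len(y_te):.3f}, p-value = {result.pvalue:.4f}")
```

The sample-splitting step is what makes the binomial null calibration valid: the classifier is independent of the held-out labels, so under the null each prediction is a fair coin flip.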
