Cross-Validation with Confidence

Apr 05 (Wednesday) at 2pm, GHC-8102

Speaker: Jing Lei

Abstract: Cross-validation is one of the most popular model selection methods in statistics and machine learning. Despite its wide applicability, traditional cross-validation tends to select overfitted models unless the ratio of the training to the testing sample size is much smaller than conventional choices. We argue that this overfitting tendency arises because cross-validation ignores the uncertainty in the testing sample. Starting from this observation, we develop a new, statistically principled inference tool based on cross-validation that accounts for this uncertainty. The new method outputs a small set of highly competitive candidate models that contains the best one with a guaranteed probability. As a consequence, our method achieves consistent variable selection in a classical linear regression setting, where existing cross-validation methods require unconventional split ratios. We demonstrate the performance of the proposed method on several simulated and real data examples.

Link: https://arxiv.org/abs/1703.07904
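To give a flavor of the idea, here is a minimal Python sketch, not the paper's actual procedure: each candidate's held-out losses are compared with every competitor's via a paired one-sided test, and a candidate stays in the returned set only if no rival is significantly better. The function name cv_confidence_set, the normal approximation with a Bonferroni adjustment, and the toy candidate models are all illustrative assumptions; the paper instead calibrates a studentized statistic (e.g., by a bootstrap) to obtain its coverage guarantee.

import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import KFold


def cv_confidence_set(models, X, y, n_splits=5, alpha=0.05):
    """Return indices of models surviving pairwise loss comparisons.

    Illustrative sketch only: a model is dropped when a paired
    one-sided test (normal approximation, Bonferroni-adjusted) finds
    its held-out squared-error loss significantly larger than some
    competitor's.
    """
    n = len(y)
    losses = np.empty((len(models), n))  # per-observation held-out losses
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in folds.split(X):
        for j, model in enumerate(models):
            pred = model.fit(X[train], y[train]).predict(X[test])
            losses[j, test] = (y[test] - pred) ** 2

    keep = []
    for j in range(len(models)):
        p_values = []
        for k in range(len(models)):
            if k == j:
                continue
            d = losses[j] - losses[k]  # positive mean => model j is worse
            t = np.sqrt(n) * d.mean() / d.std(ddof=1)
            p_values.append(stats.norm.sf(t))  # one-sided upper tail
        # Keep model j unless some rival beats it significantly.
        if min(p_values) > alpha / (len(models) - 1):
            keep.append(j)
    return keep


# Toy usage on synthetic data; prints the indices of the candidates
# that remain in the confidence set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
print(cv_confidence_set([LinearRegression(), Lasso(alpha=0.1),
                         Ridge(alpha=1.0)], X, y))

The key departure from ordinary cross-validation is the final step: instead of returning the single loss-minimizing model, the sketch returns every model whose gap to the apparent winner is within testing-sample noise, which is how a set of "highly competitive" candidates arises.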