Speaker: Yining Wang
Abstract: In this paper we consider the problem of global optimization of unknown non-convex smooth functions. The algorithm is allowed to adaptively query the underlying function at different locations and receives noisy evaluations of function values at the queried points. Optimization performance is evaluated by the expected difference between the function values at the estimated optimum and at the true optimum. Unlike in classical optimization problems, first-order information such as gradients is not accessible. We propose a local minimax theoretical framework to characterize the fundamental difficulty of optimizing smooth functions with adaptive function evaluations. We show that for functions with fast level-set growth around the global minimum, adaptive algorithms converge faster. For the special case of strongly convex and smooth functions, our implied convergence rates match those developed specifically for zeroth-order convex optimization problems [1, 27]. On the other hand, for worst-case smooth functions no algorithm can converge faster than the minimax rate of estimating the entire unknown function in the sup norm. We also give an explicit, intuitive, and efficient algorithm that attains the derived upper error bounds.
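For concreteness, the query model and evaluation criterion described above can be written in the following standard form; the notation (f, x_t, y_t, xi_t, \hat{x}_n, x^*) is introduced here for illustration and is not taken from the paper itself:

\[
  y_t = f(x_t) + \xi_t, \qquad \mathbb{E}[\xi_t] = 0, \qquad t = 1,\dots,n,
\]
\[
  \mathcal{E}_n = \mathbb{E}\bigl[f(\widehat{x}_n) - f(x^*)\bigr], \qquad x^* \in \operatorname*{arg\,min}_x f(x),
\]

where x_t is the query point chosen adaptively from past observations, y_t is the noisy zeroth-order evaluation, \hat{x}_n is the estimated optimum after n queries, and \mathcal{E}_n is the expected optimization error that the local minimax framework bounds.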