Speaker: Yusha Liu
Abstract: We consider the problem of sequentially optimizing a black-box function f based on noisy samples and bandit feedback. We assume that f is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We will discuss algorithm-independent lower bounds, focusing on the cumulative regret, which measures the sum of regrets over the T chosen points, and present results for two commonly-used stationary kernels: squared exponential (SE) and Matérn. https://arxiv.org/abs/1706.00090
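As a concrete illustration of the objects mentioned in the abstract, the sketch below defines the two stationary kernels (SE, and Matérn in its Bessel-free closed forms for ν ∈ {1/2, 3/2, 5/2}) and the cumulative regret R_T = Σ_t (f(x*) − f(x_t)). This is only an illustrative sketch of the setting, not the paper's algorithm or bounds; the function names and the restriction to closed-form Matérn smoothness values are my own choices.

```python
import numpy as np

def se_kernel(x, y, lengthscale=1.0):
    """Squared exponential kernel: k(x, y) = exp(-||x - y||^2 / (2 l^2))."""
    r2 = np.sum((np.atleast_1d(x) - np.atleast_1d(y)) ** 2)
    return np.exp(-r2 / (2.0 * lengthscale ** 2))

def matern_kernel(x, y, lengthscale=1.0, nu=2.5):
    """Matérn kernel; closed forms for nu in {0.5, 1.5, 2.5} (no Bessel functions)."""
    r = np.sqrt(np.sum((np.atleast_1d(x) - np.atleast_1d(y)) ** 2))
    if nu == 0.5:
        return np.exp(-r / lengthscale)
    if nu == 1.5:
        s = np.sqrt(3.0) * r / lengthscale
        return (1.0 + s) * np.exp(-s)
    if nu == 2.5:
        s = np.sqrt(5.0) * r / lengthscale
        return (1.0 + s + s ** 2 / 3.0) * np.exp(-s)
    raise ValueError("closed form implemented only for nu in {0.5, 1.5, 2.5}")

def cumulative_regret(f, xs, x_star):
    """R_T = sum over the T chosen points xs of the gap f(x*) - f(x_t)."""
    return sum(f(x_star) - f(x) for x in xs)
```

For example, with f(x) = -x² (maximized at x* = 0), querying the points [1.0, 0.5] incurs cumulative regret 1.0 + 0.25 = 1.25; the lower bounds discussed in the talk characterize how fast any algorithm can make R_T grow sublinearly in T for RKHS-bounded f under these kernels.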