Speaker: Christoph Dann
Abstract: Reinforcement learning is one of the most prominent frameworks for sequential decision making under uncertainty, with a wide range of applications including tasks in robotics, healthcare, education, and advertising. In this talk, I will first give a brief overview of the landscape of reinforcement learning problems and important open research questions. I will then focus on reinforcement learning in episodic Markov decision processes and discuss model-based algorithms that rely on optimism in the face of uncertainty. Finally, I will present the key insights of recent analyses showing that these methods achieve near-optimal sample complexity. The last part of the talk is based on our NIPS'15 paper, available at http://arxiv.org/abs/1510.08906.
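
To give a concrete flavor of the "optimism in the face of uncertainty" idea mentioned in the abstract, below is a minimal sketch for a small tabular episodic MDP: the agent plans with backward induction on its empirical model plus a Hoeffding-style exploration bonus, so under-visited state-action pairs look attractive. The environment, bonus form, and constants are illustrative assumptions, not the exact algorithm or analysis from the paper.

```python
import numpy as np

# Sketch: optimistic planning in a small synthetic episodic MDP (assumed setup,
# not the algorithm analyzed in the paper).

rng = np.random.default_rng(0)
nS, nA, H = 5, 2, 10                                # tiny MDP and horizon (assumption)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))       # true transitions, unknown to the agent
R = rng.uniform(size=(nS, nA))                      # true mean rewards in [0, 1]

counts = np.zeros((nS, nA))                         # visit counts N(s, a)
reward_sum = np.zeros((nS, nA))                     # accumulated observed rewards
trans_counts = np.zeros((nS, nA, nS))               # transition counts N(s, a, s')


def optimistic_planning(delta=0.1):
    """Finite-horizon value iteration on the empirical model with a
    Hoeffding-style bonus (a simplified, illustrative bonus choice)."""
    Q = np.zeros((H, nS, nA))
    V = np.zeros((H + 1, nS))
    for h in range(H - 1, -1, -1):
        for s in range(nS):
            for a in range(nA):
                n = max(counts[s, a], 1)
                r_hat = reward_sum[s, a] / n            # empirical mean reward
                p_hat = trans_counts[s, a] / n          # empirical transition distribution
                bonus = H * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
                Q[h, s, a] = min(H, r_hat + p_hat @ V[h + 1] + bonus)
            V[h, s] = Q[h, s].max()
    return Q


for episode in range(200):
    Q = optimistic_planning()
    s = 0                                           # fixed start state (assumption)
    for h in range(H):
        a = int(np.argmax(Q[h, s]))                 # act greedily w.r.t. optimistic values
        s_next = rng.choice(nS, p=P[s, a])
        counts[s, a] += 1
        reward_sum[s, a] += R[s, a]                 # observe mean reward for simplicity
        trans_counts[s, a, s_next] += 1
        s = s_next
```

Because the bonus shrinks roughly like 1/sqrt(N(s, a)), optimism fades as data accumulates and the greedy policy converges toward acting on the empirical model alone; the talk discusses how such constructions lead to near-optimal sample-complexity guarantees.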