Multi-group agnostic learning via sleeping experts and adaptive hedging

29 Mar, 2023, 2:30-4:00 pm, GHC 8102

Speaker: Chirag Gupta

Abstract: An agnostic learning algorithm finds a predictor that is competitive with the best predictor in a benchmark hypothesis class. However, the predictor might be quite sub-optimal on structured subgroups of individuals, such as protected demographic groups. The goal of multi-group agnostic learning is as follows: fixing some loss, a benchmark class H, and a collection of (potentially overlapping) subgroups G, learn a predictor such that the loss experienced by every group g ∈ G is not much larger than the best possible loss (within H) for g. The phrase "multi-group agnostic PAC" was coined by Rothblum and Yona (2021), and my description above is taken almost directly from their abstract. Blum and Lykouris (2020) studied multi-group agnostic online learning for bounded decomposable losses. Their (surprising) finding is that a single randomized strategy can control the regret on every group g simultaneously, at almost the same level one would achieve by running a separate strategy for each g. They do this via a reduction to the problem of 'sleeping' experts, which in turn can be solved using adaptive versions of the well-known Hedge algorithm. I will present the multi-group learning goal, make the reduction to sleeping experts, and (time permitting) discuss the details of a specific adaptive hedging algorithm by Luo and Schapire (2015).

Relevant papers:
- Rothblum and Yona (2021), https://arxiv.org/abs/2105.09989
- Blum and Lykouris (2020), https://arxiv.org/abs/1909.08375
- Luo and Schapire (2015), https://arxiv.org/abs/1502.05934
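For concreteness, the multi-group goal described in the abstract can be written as follows. The tolerance ε and the conditional-expectation reading of "the loss experienced by group g" are my notation, not taken verbatim from the papers: the learned predictor f should satisfy, simultaneously for every group,

\[
  \forall g \in \mathcal{G}: \qquad
  \mathbb{E}\big[\ell(f(x), y) \mid x \in g\big]
  \;\le\; \min_{h \in \mathcal{H}} \mathbb{E}\big[\ell(h(x), y) \mid x \in g\big] + \epsilon .
\]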
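To make the sleeping-experts object concrete: in that setting each expert is only "awake" on some rounds, and the learner must compete with every expert on the rounds it was awake. The sketch below is a minimal, fixed-learning-rate version of the classical sleeping-experts ("specialists") Hedge update, written in Python; it is only an illustrative sketch, not the Blum and Lykouris (2020) reduction nor the specific adaptive algorithm of Luo and Schapire (2015).

import numpy as np

def specialists_hedge(losses, awake, eta=0.5):
    """Classical sleeping-experts ("specialists") Hedge with a fixed rate eta.

    losses: (T, N) array of per-expert losses in [0, 1]
    awake:  (T, N) boolean array; awake[t, i] is True if expert i is awake at round t
    Assumes at least one expert is awake in every round.
    Returns the (T, N) array of probability vectors played over the awake sets.
    """
    T, N = losses.shape
    w = np.ones(N) / N
    plays = np.zeros((T, N))
    for t in range(T):
        a = awake[t]
        plays[t, a] = w[a] / w[a].sum()  # Hedge distribution over the awake experts only
        new_wa = w[a] * np.exp(-eta * losses[t, a])
        # Rescale so the awake set's total weight is preserved; asleep experts are
        # left untouched (a standard convention in the specialists setup).
        w[a] = new_wa * (w[a].sum() / new_wa.sum())
    return plays

# Toy usage (hypothetical data): 5 experts, 100 rounds, each awake with prob. 1/2.
rng = np.random.default_rng(0)
L = rng.uniform(size=(100, 5))
A = rng.uniform(size=(100, 5)) < 0.5
A[:, 0] = True  # ensure at least one expert is awake every round
P = specialists_hedge(L, A)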