Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems

21 Feb, 2025, 4-5 pm, GHC 8102

Speaker: Tomas Gonzalez

Abstract: Stochastic Convex Optimization (SCO) and Stochastic Saddle Point (SSP) problems are fundamental in ML. At the same time, protecting users' privacy has emerged as a critical concern in ML applications, and Differential Privacy (DP) has become the gold standard for formal privacy guarantees. Existing lower bounds for SCO under DP show that the excess risk is polynomial in the dimension in the worst case. However, it has been shown that under additional geometric assumptions it is possible to achieve excess risk that is only polylogarithmic in the dimension. We generalize these polylogarithmic-in-the-dimension rates to the more general problem of SSP under DP, using techniques very different from those developed for the SCO case. Talk based on: https://arxiv.org/pdf/2403.02912
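
As background (not part of the original abstract), here is a minimal sketch of the standard SSP formulation and its usual accuracy measure, the duality gap; the symbols F, f, \mathcal{X}, \mathcal{Y}, and \mathcal{D} are generic notation chosen for illustration, not taken from the paper:

% Stochastic saddle-point problem: minimize over x, maximize over y,
% where the objective is an expectation over an unknown distribution D
% from which the algorithm sees n i.i.d. samples.
\[
\min_{x \in \mathcal{X}} \, \max_{y \in \mathcal{Y}} \; F(x, y) \;:=\; \mathbb{E}_{s \sim \mathcal{D}} \big[ f(x, y; s) \big].
\]
% Accuracy of a candidate solution (x-hat, y-hat) is measured by the
% duality gap, the saddle-point analogue of excess risk in SCO:
\[
\mathrm{Gap}(\hat{x}, \hat{y}) \;:=\; \max_{y \in \mathcal{Y}} F(\hat{x}, y) \;-\; \min_{x \in \mathcal{X}} F(x, \hat{y}).
\]

For context on the rates mentioned above: known lower bounds for DP-SCO in the Euclidean (ℓ2) setting force the excess risk to grow roughly like √d/(nε) in the dimension d, while in ℓ1-type geometries rates scaling only polylogarithmically in d are achievable; the talk concerns the analogous polylog(d) regime for the duality gap in SSP.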