This lecture covers recent work on differentially private machine learning (DP-ML). First, I will present simple yet effective strategies to improve the performance of DP stochastic gradient descent (DP-SGD), the most widely adopted method for DP-ML. Then I will discuss ways to defend against Byzantine attacks when DP-SGD is used in federated learning. Finally, I will talk about our recent exploration of input perturbation with DP and of synthetic data generation, another popular approach to DP-ML.
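For context, DP-SGD differs from plain SGD in two ways: each example's gradient is clipped to a fixed L2 norm, and calibrated Gaussian noise is added before the update. The sketch below illustrates one such step for logistic regression; the function name, hyperparameters, and loss choice are illustrative assumptions, not details from the talk.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One illustrative DP-SGD step for logistic regression.

    Assumed interface (not from the talk): w is a (d,) weight vector,
    X is an (n, d) batch, y holds 0/1 labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-example gradients of the logistic loss, shape (n, d).
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X
    # Clip each example's gradient so its L2 norm is at most `clip`.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)
    # Sum, add Gaussian noise scaled to the clipping bound, then average.
    noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad
```

The clipping bound caps each example's influence on the update, which is what lets the added noise translate into a formal DP guarantee (the privacy accounting itself is omitted here).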
Tianhao Wang is an assistant professor of computer science at the University of Virginia. His research interests lie in data privacy and security and their connections to machine learning and cryptography. He obtained his Ph.D. from Purdue University in 2021 and then held a postdoctoral position at Carnegie Mellon University. His work on differentially private synthetic data generation won multiple awards in NIST competitions.