When Machines Learn About Humans

Moritz Hardt, IBM Research Almaden

The “human element” in data introduces fundamental algorithmic challenges such as protecting individual privacy, ensuring fairness in classification, and designing algorithms that are robust to population outliers and adversarial conditions. This talk focuses on the fruitful interplay between privacy and robustness.

We will first give a simple and practical method for computing the principal components of a data set under the strong notion of differential privacy. Our algorithm always guarantees privacy; its utility analysis circumvents a known impossibility result in differential privacy by relying on a realistic assumption central to robust principal component analysis.
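To fix ideas, here is a minimal Python sketch of one standard baseline for differentially private PCA, input perturbation: add calibrated Gaussian noise to the empirical covariance matrix and then eigendecompose. The function name, the row-normalization assumption, and the exact noise calibration are illustrative choices, not the algorithm presented in the talk.

```python
import numpy as np

def private_pca(A, k, eps, delta, rng=None):
    """Toy (eps, delta)-differentially private PCA via input perturbation.

    Assumes every row of A has L2 norm at most 1, so changing one row moves
    the covariance A.T @ A by at most 1 in Frobenius norm.  Standard
    Gaussian-mechanism calibration; not the algorithm from the talk.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[1]
    cov = A.T @ A
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps   # noise scale for sensitivity 1
    noise = rng.normal(scale=sigma, size=(d, d))
    noise = (noise + noise.T) / np.sqrt(2.0)            # symmetrize so the spectrum stays real
    vals, vecs = np.linalg.eigh(cov + noise)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order]                               # top-k noisy principal directions

# Usage on synthetic, row-normalized data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
print(private_pca(X, k=3, eps=1.0, delta=1e-5, rng=rng).shape)  # (20, 3)
```

Worst-case noise of this kind can swamp the signal in high dimensions; the point of the abstract is that a realistic structural assumption, of the kind used in robust PCA, lets the utility analysis sidestep such worst-case behavior.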

We then turn to the problem of analyzing massive data sets using few linear measurements—an algorithmic paradigm known as “linear sketching”. Here we prove a “no free lunch” theorem: the computational efficiency of linear sketches comes at the cost of robustness. In particular, efficient linear sketches cannot guarantee correctness on adaptively chosen inputs. Our result builds on a close connection to privacy and can be seen as a novel “reconstruction attack” in the privacy setting.
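As a generic illustration of the paradigm, the toy Python snippet below shows a textbook linear sketch (a random Gaussian projection for L2-norm estimation) and why adaptively chosen inputs are dangerous for it: an input that depends on the sketching matrix can land in its null space, where the sketch carries no information. The construction and the attack here are standard illustrations, not the specific sketches or the reconstruction-attack argument from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 50                              # ambient dimension vs. sketch size
S = rng.normal(size=(m, n)) / np.sqrt(m)     # the linear sketching matrix

# Non-adaptive input: ||S @ x||^2 concentrates around ||x||^2.
x = rng.normal(size=n)
print(np.linalg.norm(S @ x) ** 2, np.linalg.norm(x) ** 2)

# Adaptive input: knowing (or having learned) S, pick a unit vector in its
# null space; the sketch maps it to (nearly) zero and the estimate fails.
_, _, Vt = np.linalg.svd(S)                  # last rows of Vt span null(S)
x_bad = Vt[-1]
print(np.linalg.norm(S @ x_bad) ** 2, np.linalg.norm(x_bad) ** 2)  # ~0 vs. 1
```

In the adaptive setting an adversary does not see S directly, but by issuing queries and observing the sketch's answers it can gradually learn enough about S to construct such a bad input; the talk's result makes this intuition precise for all efficient linear sketches.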

Speaker Biography

Moritz Hardt is a post-doctoral researcher in the theory group at IBM Research Almaden. He completed his PhD in Computer Science at Princeton University in 2011, advised by Boaz Barak. His current work focuses on the algorithmic foundations of privacy, fairness and robustness in statistical data analysis. His general research areas include algorithms, machine learning and complexity theory.