Aleksandar Nikolov (Toronto) - The Power of Factorization Mechanisms in Differential Privacy

November 5, 2019 at 3:30pm - 4:30pm
Ryerson, Rm. 251

Speaker: Aleksandar Nikolov, Assistant Professor, Department of Computer Science, University of Toronto

I am broadly interested in the theory of computation and algorithm design, and I am part of the Theory Group. My current research interests are in the connections between high-dimensional geometry and computer science. In my work, I have applied geometric tools to the theory of private data analysis (differential privacy), discrepancy theory, and experimental design. I have also worked on computational questions in high dimensions, such as nearest neighbor search and various geometric optimization problems. I also think about approximation algorithms, and sublinear and parallel algorithms for analyzing massive data. If any of the above sounds intriguing, and you are a talented and motivated student interested in the theory of computing and the design of algorithms, I encourage you to apply to the University of Toronto.

Before coming to Toronto, between October 2014 and July 2015, I was a postdoctoral researcher in the Theory Group at Microsoft Research in Redmond. Before that, I completed my PhD in the Computer Science department at Rutgers University, where I was advised by Muthu. From 2012 to 2014 I was supported by a Simons Graduate Fellowship.

Abstract: The Power of Factorization Mechanisms in Differential Privacy

A central goal in private data analysis is to estimate statistics about an unknown distribution from a dataset possibly containing sensitive information, so that the privacy of any individual represented in the dataset is preserved. We study this question in the model of non-interactive local differential privacy (LDP), in which every person in the dataset randomizes their own data in order to preserve its privacy, before sending it to a central server. We give a characterization of the minimum number of samples necessary to get accurate estimates of a given set of statistical queries, as well as a characterization of the sample complexity of agnostic PAC learning in this model. The characterization is tight up to polylogarithmic factors for any given set of statistical queries and, respectively, any given concept class. The characterization is achieved by a simple and efficient instance-optimal (with respect to the queries/concept class) approximate factorization mechanism, i.e., a mechanism that answers the statistical queries by answering a different set of strategy queries, from which the answers to the original queries can be approximately reconstructed.
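The core idea of a factorization mechanism can be illustrated with a toy sketch. The sketch below (my own illustration, not the instance-optimal construction from the talk) answers a workload of linear queries W on a histogram by adding noise to a set of strategy queries R and reconstructing via a factorization W = L @ R; for simplicity it uses the trivial factorization L = W, R = I and central Gaussian noise rather than a local protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n users, each holding one of d items.
n, d = 1000, 4
data = rng.integers(0, d, size=n)

# Normalized histogram of the data.
x = np.bincount(data, minlength=d) / n

# Workload of statistical (linear) queries over the d items;
# here, all prefix sums of the histogram.
W = np.tril(np.ones((d, d)))

# Factorization W = L @ R: answer the strategy queries R with noise,
# then reconstruct answers to W by multiplying with L.
# (Trivial choice L = W, R = I for illustration; the talk's mechanism
# instead chooses a factorization tailored to the query workload.)
L, R = W, np.eye(d)

sigma = 0.05  # noise scale; in (L)DP this is set by the privacy budget
noisy_strategy_answers = R @ x + rng.normal(0, sigma, size=d)
reconstructed = L @ noisy_strategy_answers

true_answers = W @ x
max_error = np.max(np.abs(reconstructed - true_answers))
```

The point of choosing a good factorization is that the reconstruction error depends on properties of L and R rather than directly on W, so a well-chosen strategy can answer a hard workload far more accurately than adding noise to each query independently.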

Based on joint work with Alexander Edmonds and Jonathan Ullman.

Sponsor: Department of Mathematics & Computer Science Combinatorics & Theory Seminar

Type: talk