Cell-sampling is an elementary information-theoretic technique for proving unconditional lower bounds on the “locality” of algorithms via a compression-style argument. Despite its simplicity, cell-sampling yields state-of-the-art lower bounds in many computational models, including static and dynamic data structures, hashing, locally decodable codes (LDCs), and matrix rigidity. I will sketch some of these applications, including time-space tradeoffs for near-neighbor search and the Katz–Trevisan lower bound for general LDCs.
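As a taste of the compression-style argument (a generic sketch, not specific to any one application; the parameters $s$, $w$, $t$, $p$, $Q$ below are illustrative notation, not from the abstract): suppose a static data structure stores $s$ cells of $w$ bits each and answers each of $Q$ queries by probing at most $t$ cells. If each cell is kept independently with probability $p$, a query remains answerable whenever all of its probed cells survive, which happens with probability at least $p^t$, so

$$
\mathbb{E}\big[\#\{\text{queries answerable from the sample}\}\big] \;\ge\; p^{t} \cdot Q,
\qquad \text{while the sample holds roughly } p\,s\,w \text{ bits.}
$$

If the surviving answers reveal more information about the input than the sampled cells can hold, the data structure could be used to compress the input below its entropy, a contradiction; balancing the two sides gives the time-space tradeoff.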
[slides], [slides (PPTX)]

The central limit theorem is one of the cornerstones of modern probability theory. In recent years, probably to no one's surprise, the theorem and its variants have found applications in several areas of theoretical computer science, including complexity theory, learning theory, and algorithm design, among others. In this talk, I will discuss some of these variants, their applications, and some of the approaches used to prove central limit theorems.
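For reference, the classical statement that these variants generalize (standard notation, not taken from the talk): for i.i.d. random variables $X_1,\dots,X_n$ with mean $\mu$ and finite variance $\sigma^2 > 0$,

$$
\frac{X_1 + \cdots + X_n - n\mu}{\sigma\sqrt{n}} \;\xrightarrow{\;d\;}\; \mathcal{N}(0,1)
\quad \text{as } n \to \infty .
$$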
[slides]

I will discuss two (unrelated) small facts that have proven quite useful in various domains. The first goes by many names (the Gibbs variational principle and the Donsker–Varadhan formula, among others) and provides a very fruitful variational characterization of relative entropy (the Kullback–Leibler divergence). The second, more algorithmic, is an improvement over naive averaging/bucketing arguments, sometimes known as Levin's economical work investment strategy, which lets one leverage a lower bound on the expected value of a quantity without losing quadratic or even logarithmic factors.
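For concreteness, one common form of the first fact (a standard statement; the class of functions $f$ over which the supremum ranges, e.g. bounded measurable functions, is a detail the talk makes precise): for distributions $P \ll Q$,

$$
\mathrm{D}(P \,\|\, Q) \;=\; \sup_{f} \Big( \mathbb{E}_{X \sim P}[f(X)] \;-\; \log \mathbb{E}_{X \sim Q}\big[e^{f(X)}\big] \Big).
$$

Plugging in any particular $f$ immediately yields a lower bound on the divergence, which is what makes the characterization so fruitful.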
[slides]

Differential privacy is a notion of algorithmic stability that provides a rigorous foundation for the study of privacy-preserving data analysis. However, tools developed for differential privacy have also found applications in areas of research beyond privacy. In this talk, I will describe how one can leverage the stability guarantees of differential privacy to obtain 1) incentive compatibility in mechanism design, 2) statistical validity in adaptive data analysis, and 3) certified robustness to adversarial examples.
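To fix notation, the stability guarantee in question (the standard definition, not a result of the talk): a randomized algorithm $M$ is $(\varepsilon,\delta)$-differentially private if for every pair of datasets $D, D'$ differing in a single record and every event $S$,

$$
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
$$

Each of the three applications exploits the fact that $M$'s output distribution barely changes when one input record changes.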
[slides]