We study the learnability of sets in $\R^n$ under the Gaussian distribution, taking Gaussian \emph{surface area} as the ``complexity measure'' of the sets being learned. Let $\calC_S$ denote the class of all (measurable) sets with surface area at most $S$. We first show that the class $\calC_S$ is learnable to any constant accuracy in time $n^{O(S^2)}$, even in the arbitrary noise (``agnostic'') model. Complementing this, we also show that any learning algorithm for $\calC_S$ information-theoretically requires $2^{\Omega(S^2)}$ examples for learning to constant accuracy. These results together show that Gaussian surface area essentially characterizes the computational complexity of learning under the Gaussian distribution.
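For concreteness, Gaussian surface area can be read here as the Gaussian Minkowski content of the boundary; the abstract leaves the definition implicit, and the paper may use an equivalent formulation:
\[
\Gamma(A) \;=\; \liminf_{\delta \to 0^{+}} \frac{\Pr_{x \sim N(0, I_n)}\bigl[\,x \in A_\delta \setminus A\,\bigr]}{\delta},
\qquad
A_\delta = \{\,x \in \R^n : \mathrm{dist}(x, A) \le \delta\,\}.
\]
For sets with smooth boundary this equals $\int_{\partial A} \varphi_n(x)\,d\sigma(x)$, the standard Gaussian density integrated over the boundary; for instance, every halfspace has surface area at most $1/\sqrt{2\pi}$, so halfspaces belong to $\calC_S$ already for constant $S$.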
Our approach also yields several new learning results for specific concept classes; all of these bounds are for learning to any constant accuracy.
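The abstract does not spell out the algorithm behind the $n^{O(S^2)}$ upper bound. One standard route to an $n^{O(d)}$-time agnostic learner in settings like this is regression over all monomials of degree at most $d$ (with $d$ playing the role of $O(S^2)$), followed by thresholding. The sketch below is illustrative only, assuming that route: the toy target set, the sample size, the chosen degree, and the use of least squares in place of an $L_1$ fit are all assumptions rather than the paper's actual procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Toy target: a centered ball, chosen so labels are roughly balanced under
# N(0, I_n).  The ball is a hypothetical stand-in for a low-surface-area set.
def target(X):
    return np.where((X ** 2).sum(axis=1) <= X.shape[1], 1.0, -1.0)

n, m, degree = 5, 4000, 4          # dimension, sample size, degree ~ O(S^2)

X_train = rng.standard_normal((m, n))      # examples drawn from N(0, I_n)
y_train = target(X_train)

# Regression over all monomials of degree <= `degree`: n^{O(degree)} features.
feats = PolynomialFeatures(degree)
Phi = feats.fit_transform(X_train)
poly = LinearRegression(fit_intercept=False).fit(Phi, y_train)

# Threshold the fitted polynomial at the empirically best cutoff.
scores = poly.predict(Phi)
cutoffs = np.quantile(scores, np.linspace(0.01, 0.99, 99))
errors = [np.mean(np.where(scores >= t, 1.0, -1.0) != y_train) for t in cutoffs]
t_star = cutoffs[int(np.argmin(errors))]

X_test = rng.standard_normal((m, n))
preds = np.where(poly.predict(feats.transform(X_test)) >= t_star, 1.0, -1.0)
print("test error:", np.mean(preds != target(X_test)))
```

The least-squares fit keeps the sketch short; agnostic-learning analyses of polynomial regression typically use an $L_1$ fit, which can be solved as a linear program.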