We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function that depends on an unknown set of $k$ out of $n$ Boolean variables. We give an algorithm for learning such functions from uniform random examples that runs in time roughly $(n^k)^{\frac{\omega}{\omega+1}}$, where $\omega < 2.376$ is the matrix multiplication exponent. We thus obtain the first polynomial-factor improvement over the naive $n^k$ time bound achieved by exhaustive search. Our algorithm and analysis exploit new structural properties of Boolean functions.
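To make the $n^k$ baseline mentioned above concrete, here is a minimal sketch (not the paper's algorithm) of exhaustive search for a $k$-junta: enumerate all $\binom{n}{k}$ candidate variable subsets and keep the first one on which the examples are consistent with some function of just those variables. All names and the demo target function are illustrative.

```python
import itertools
import random

def learn_junta_naive(examples, n, k):
    # Try every size-k subset of the n variables; for each, check whether
    # the examples are consistent with SOME function of just those variables
    # (i.e., no two examples agree on the subset but disagree on the label).
    # This is the roughly n^k-time exhaustive-search baseline.
    for subset in itertools.combinations(range(n), k):
        table = {}
        consistent = True
        for x, y in examples:
            key = tuple(x[i] for i in subset)
            if table.setdefault(key, y) != y:
                consistent = False
                break
        if consistent:
            return subset, table  # a consistent size-k hypothesis
    return None

# Demo: a hidden 2-junta (XOR of variables 1 and 3) among n = 6 variables.
random.seed(0)
n, k = 6, 2
def hidden(x):
    return x[1] ^ x[3]
examples = [(x, hidden(x)) for x in
            (tuple(random.randint(0, 1) for _ in range(n)) for _ in range(200))]
result = learn_junta_naive(examples, n, k)
```

Since the true relevant subset is always consistent, the search is guaranteed to return some consistent hypothesis; with enough random examples, spurious subsets are ruled out with high probability. The paper's contribution is beating this $n^k$ enumeration by a polynomial factor.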
Note (August 2006): Eric Bach and Lisa Hellerstein have informed us that Theorem 12 was proved earlier by T. Siegenthaler, IEEE Transactions on Information Theory 30(5), 1984. That result can also be used in place of Theorem 13.