Given any linear threshold function $f$ on $n$ Boolean variables, we construct a linear threshold function $g$ which disagrees with $f$ on at most an $\eps$ fraction of inputs and has integer weights each of magnitude at most $\sqrt{n} \cdot 2^{\tilde{O}(1/\eps^2)}$. We show that the construction is optimal in terms of its dependence on $n$ by proving a lower bound of $\Omega(\sqrt{n})$ on the weights required to approximate a particular linear threshold function.
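To make the setting concrete, the sketch below (a naive baseline, \emph{not} this paper's construction, whose analysis is substantially more involved) compares an LTF $\mathrm{sign}(w \cdot x - \theta)$ against the approximator obtained by simply scaling and rounding its weights to integers of magnitude at most $K$; the weights, threshold, and bound \texttt{K} are hypothetical. Naive rounding gives no control on the disagreement fraction in general, which is what makes the $\sqrt{n} \cdot 2^{\tilde{O}(1/\eps^2)}$ bound nontrivial.
\begin{verbatim}
from itertools import product

def sign(t):
    return 1 if t >= 0 else -1

def disagreement(w_f, t_f, w_g, t_g, n):
    """Fraction of the 2^n inputs x in {-1,1}^n on which the LTFs
    sign(w_f . x - t_f) and sign(w_g . x - t_g) disagree."""
    bad = 0
    for x in product((-1, 1), repeat=n):
        fx = sign(sum(w * xi for w, xi in zip(w_f, x)) - t_f)
        gx = sign(sum(w * xi for w, xi in zip(w_g, x)) - t_g)
        bad += (fx != gx)
    return bad / 2 ** n

# Hypothetical real-weight LTF f and a target integer weight bound K.
w, theta, K = [0.31, 2.4, -1.07, 0.55, 1.9], 0.2, 4
scale = K / max(abs(wi) for wi in w)
w_int = [round(wi * scale) for wi in w]   # naive integer approximator g
print(w_int, disagreement(w, theta, w_int, theta * scale, len(w)))
\end{verbatim}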
We give two applications. The first is a \emph{deterministic} algorithm for approximately counting the fraction of satisfying assignments to an instance of the zero-one knapsack problem to within an additive $\pm\eps$. The algorithm runs in time polynomial in $n$ (but exponential in $1/\eps^2$).
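To indicate how low-weight approximators can yield deterministic counting: once $f$ is replaced by an approximator $g$ with integer weights of magnitude at most $W$, the fraction of satisfying assignments of $g$ can be counted \emph{exactly} by standard pseudo-polynomial dynamic programming in $O(n^2 W)$ time, which is polynomial in $n$ when $W = \sqrt{n} \cdot 2^{\tilde{O}(1/\eps^2)}$ for constant $\eps$. The sketch below shows only that counting step, under the convention that $x \in \{0,1\}^n$ satisfies the instance if $w \cdot x \le \theta$; the instance itself is hypothetical.
\begin{verbatim}
from collections import defaultdict

def knapsack_fraction(weights, theta):
    """Exact fraction of x in {0,1}^n with sum(w_i * x_i) <= theta,
    by dynamic programming over the achievable partial sums.
    With integer weights of magnitude at most W this takes O(n^2 * W)
    time, so it is poly(n) once the weights have been reduced."""
    counts = defaultdict(int)
    counts[0] = 1                  # one way to realize the empty sum
    for w in weights:
        new = defaultdict(int)
        for s, c in counts.items():
            new[s] += c            # x_i = 0: sum unchanged
            new[s + w] += c        # x_i = 1: sum grows by w
        counts = new
    good = sum(c for s, c in counts.items() if s <= theta)
    return good / 2 ** len(weights)

print(knapsack_fraction([3, 5, 2], 6))  # 5 of 8 assignments: 0.625
\end{verbatim}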
In our second application, we show that any linear threshold function $f$ is specified to within error $\eps$ by estimates of its Chow parameters (its degree-0 and degree-1 Fourier coefficients) which are accurate to within an additive $\pm 1/(n \cdot 2^{\tilde{O}(1/\eps^2)})$. This is the first such accuracy bound which is inverse polynomial in $n$ (previous work of Goldberg~\cite{Goldberg:01} gave a $1/\quasipoly(n)$ bound), and it gives the first bound polynomial in $n$ on the number of examples required for learning linear threshold functions in the ``restricted focus of attention'' framework.
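For concreteness, the Chow parameters of $f : \{-1,1\}^n \to \{-1,1\}$ are the $n+1$ numbers $\hat{f}(\emptyset) = \mathbf{E}[f(x)]$ and $\hat{f}(i) = \mathbf{E}[f(x)x_i]$ under the uniform distribution. The sketch below computes them by brute-force enumeration; it is feasible only for small $n$ and is intended only to fix notation, since the result above concerns what accuracy of \emph{estimates} of these numbers suffices.
\begin{verbatim}
from itertools import product

def chow_parameters(f, n):
    """The n+1 Chow parameters (E[f(x)], E[f(x)x_1], ..., E[f(x)x_n])
    of a Boolean function f: {-1,1}^n -> {-1,1}, by enumerating all
    2^n inputs under the uniform distribution."""
    chow = [0.0] * (n + 1)
    for x in product((-1, 1), repeat=n):
        fx = f(x)
        chow[0] += fx                  # degree-0 coefficient
        for i in range(n):
            chow[i + 1] += fx * x[i]   # degree-1 coefficients
    return [c / 2 ** n for c in chow]

# Example: majority on 3 variables, an LTF with all weights equal.
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(chow_parameters(maj3, 3))        # [0.0, 0.5, 0.5, 0.5]
\end{verbatim}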