I’m presenting Barron’s Theorem in the Machine Learning reading group today.

**Abstract:** I will state and prove Barron’s Theorem, which shows that single-hidden-layer neural networks can evade the curse of dimensionality. Barron’s Theorem bounds the error of the best neural network approximation to a function in terms of the number of hidden units and the smoothness of the function, with no explicit dependence on the dimension.
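For reference, here is a standard form of the theorem, paraphrased from my reading of Barron’s 1993 paper (take the exact constant as a sketch of what I’ll present). The smoothness measure is the first moment of the Fourier magnitude of $f$:

$$C_f = \int_{\mathbb{R}^d} \|\omega\| \, |\hat{f}(\omega)| \, d\omega.$$

If $C_f < \infty$, then for every $n \ge 1$ there are weights $a_k, b_k, c_k$ such that the single-hidden-layer sigmoidal network

$$f_n(x) = \sum_{k=1}^{n} c_k \, \sigma(a_k \cdot x + b_k) + c_0$$

satisfies

$$\int_{B_r} \big(f(x) - f_n(x)\big)^2 \, \mu(dx) \le \frac{(2 r C_f)^2}{n}$$

for any probability measure $\mu$ on the ball $B_r$ of radius $r$. The bound depends only on $n$ and $C_f$, not explicitly on the dimension $d$.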

**Update:** Thanks to Alex Anderson for pointing out limitations of Barron’s Theorem. In his words:

In the context of deep learning, Barron’s Theorem has been a huge red herring for the neural network community. Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters), and these approaches are never used in practice now. There are some things happening that are much more subtle than can be treated in this fashion.

Here are some recent papers and pages he recommends:

- http://www.stat.berkeley.edu/~bruna/publications.shtml
- http://arxiv.org/pdf/1406.2572.pdf
- http://stanford.edu/~asaxe/
