What happens if you over-train a neural net?
When you let a machine learn too much, it may end up doing worse. It is just like us: as human beings, we start forgetting things, or even go crazy, when we are forced to study excessively.
All jokes aside, I have had some chances to play with deep and (reasonably) big neural networks, and I just saw firsthand what is described above.
In the first experiment, I trained a feed-forward neural network on MNIST. The net has three hidden layers with 1024, 1024, and 2048 hidden units, fully connected, trained by stochastic gradient descent with momentum, L2 weight decay, and a decaying learning rate. The cost function is cross-entropy. The net is similar to the one described here, but two layers deeper. The numbers of errors on the training/validation/test sets are displayed in the figure below.
It went wild, eventually. After 700*2000 =…