
The Resilient Propagation (RProp) algorithm

DeepTrainer

The RProp (Resilient Propagation) algorithm is a supervised learning method for training multi-layered neural networks, first published in 1993 by Martin Riedmiller and Heinrich Braun. The idea behind it is that the magnitudes of the partial derivatives can have harmful effects on the weight updates. RProp therefore uses an internal adaptive scheme that considers only the signs of the derivatives and ignores their magnitudes entirely: the size of each weight update is determined by a per-weight update value, which is independent of the size of the gradients.

$latex
\Delta w_{i,j}^{(t)}=\begin{cases}
-\Delta_{i,j}^{(t)} & ,\text{if} \ \frac{\partial E^{(t)}}{\partial w_{i,j}} > 0 \\
+\Delta_{i,j}^{(t)} & ,\text{if} \ \frac{\partial E^{(t)}}{\partial w_{i,j}} < 0 \\
0 & , \text{otherwise}\end{cases}
&s=-2&bg=ffffff&fg=000000$

Here $latex \frac{\partial E^{(t)}}{\partial w_{i,j}}$ is the gradient accumulated over the whole pattern set, obtained from batch backpropagation. The second step of the RProp algorithm…
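The excerpt breaks off at the second step. In Riedmiller and Braun's formulation, that step adapts the update values $latex \Delta_{i,j}$ themselves according to the sign behaviour of consecutive gradients; a sketch in the same notation as above:

$latex
\Delta_{i,j}^{(t)}=\begin{cases}
\eta^{+}\cdot\Delta_{i,j}^{(t-1)} & ,\text{if} \ \frac{\partial E^{(t-1)}}{\partial w_{i,j}}\cdot\frac{\partial E^{(t)}}{\partial w_{i,j}} > 0 \\
\eta^{-}\cdot\Delta_{i,j}^{(t-1)} & ,\text{if} \ \frac{\partial E^{(t-1)}}{\partial w_{i,j}}\cdot\frac{\partial E^{(t)}}{\partial w_{i,j}} < 0 \\
\Delta_{i,j}^{(t-1)} & , \text{otherwise}\end{cases}
&s=-2&bg=ffffff&fg=000000$

with $latex 0 < \eta^{-} < 1 < \eta^{+}$; the paper's standard values are $latex \eta^{-}=0.5$ and $latex \eta^{+}=1.2$.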
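The sign-based update can also be sketched in code. The following is a minimal NumPy illustration of one RProp-style step — the variant without weight backtracking (often called iRProp−), not the exact procedure from the original paper — and all names and the driver loop are illustrative; only the constants (1.2, 0.5, and the step-size bounds) are the standard values:

```python
import numpy as np

# Illustrative sketch of one RProp-style update step (iRProp- variant,
# i.e. no weight backtracking). Constants are the standard values from
# Riedmiller and Braun's paper; everything else is a hypothetical name.
ETA_PLUS, ETA_MINUS = 1.2, 0.5     # step-size growth / shrink factors
DELTA_MAX, DELTA_MIN = 50.0, 1e-6  # bounds on the update values

def rprop_step(w, grad, prev_grad, delta):
    """One update of weights w; returns (new_w, grad_to_store, new_delta)."""
    sign_change = grad * prev_grad
    # Gradient kept its sign: the direction is stable, so grow the step.
    delta = np.where(sign_change > 0,
                     np.minimum(delta * ETA_PLUS, DELTA_MAX), delta)
    # Gradient flipped sign: we jumped over a minimum, so shrink the step.
    delta = np.where(sign_change < 0,
                     np.maximum(delta * ETA_MINUS, DELTA_MIN), delta)
    # After a sign flip the stored gradient is zeroed, so those weights
    # pause for one step (sign(0) = 0) and skip adaptation next time.
    grad = np.where(sign_change < 0, 0.0, grad)
    # Only the SIGN of the gradient sets the direction of the update;
    # its magnitude never enters the weight change.
    w = w - np.sign(grad) * delta
    return w, grad, delta
```

For example, minimizing E(w) = ½w² (whose gradient is simply w), repeated calls drive w toward zero regardless of how large the initial gradient is.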
