The Resilient Propagation (RProp) algorithm

Portfolio & Past Research in the field of Artificial Neural Networks

The RProp algorithm is a supervised learning method for training multilayer neural networks, first published in 1993 by Martin Riedmiller and Heinrich Braun. The idea behind it is that the magnitudes of the partial derivatives can have harmful effects on the weight updates. RProp therefore uses an internal adaptive scheme that considers only the signs of the derivatives and ignores their magnitudes entirely. The size of each weight update is instead determined by a dedicated per-weight update value, which is independent of the magnitude of the gradient.

$$
\Delta w_{i,j}^{(t)} =
\begin{cases}
-\Delta_{i,j}^{(t)} & \text{if } \frac{\partial E^{(t)}}{\partial w_{i,j}} > 0 \\[4pt]
+\Delta_{i,j}^{(t)} & \text{if } \frac{\partial E^{(t)}}{\partial w_{i,j}} < 0 \\[4pt]
0 & \text{otherwise}
\end{cases}
$$

Here $\frac{\partial E^{(t)}}{\partial w_{i,j}}$ is the gradient summed over the whole pattern set, obtained from one batch backpropagation pass. The second step of the RProp algorithm…
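To make the update rule concrete, here is a minimal NumPy sketch of one RProp step. It follows the simpler Rprop− variant (no weight backtracking); the constants eta_plus = 1.2, eta_minus = 0.5 and the step-size bounds are the commonly cited defaults rather than values taken from this excerpt, and the adaptive "second step" shown in the comments is the standard textbook rule.

import numpy as np

def rprop_step(w, grad, prev_grad, delta,
               eta_plus=1.2, eta_minus=0.5,
               delta_min=1e-6, delta_max=50.0):
    # Second step: adapt each per-weight update value. Grow it where the
    # batch gradient kept its sign, shrink it where the sign flipped.
    same_sign = grad * prev_grad
    delta = np.where(same_sign > 0, np.minimum(delta * eta_plus, delta_max), delta)
    delta = np.where(same_sign < 0, np.maximum(delta * eta_minus, delta_min), delta)
    # First step (the equation above): move each weight against the
    # sign of its gradient by the current update value.
    w = w - np.sign(grad) * delta
    return w, delta

# Toy usage: minimize E(w) = sum(w^2), whose batch gradient is 2w.
w = np.array([3.0, -2.0, 5.0])
delta = np.full_like(w, 0.1)      # Delta_0 = 0.1 is a common initial value
prev_grad = np.zeros_like(w)
for _ in range(50):
    grad = 2 * w
    w, delta = rprop_step(w, grad, prev_grad, delta)
    prev_grad = grad
print(w)                          # each component approaches 0

Because only the sign of the gradient is used, the step sizes tune themselves: they grow on smooth stretches of the error surface and shrink near a minimum, where the gradient sign starts to oscillate.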

Posted on April 27, 2013, in Algorithms.
