# Monthly Archives: April 2013

## Age: 17, Employer: Yahoo, Worth: tens of millions of USD!

A seventeen-year-old high school student from the UK is the latest employee of software giant Yahoo! As if that were not enough, his bank balance now boasts tens of millions of US dollars!

Who is this guy, and what on earth made him filthy rich? Remember Mark Zuckerberg and his story of becoming a billionaire by building Facebook? Seventeen-year-old software prodigy Nick D’Aloisio sold his app Summly to Yahoo for an undisclosed amount that could easily be in the tens of millions (internet gossip says $30 million)!

Summly, formerly Trimit, is an iOS app that summarizes news it receives from hundreds of sources and presents them to the user in an intuitive way. At its core is an algorithm that summarizes text using artificial intelligence and natural language processing techniques, tackling the problem of information overload.

Summly has announced that the app will be removed…

View original post 74 more words

## Why Online Education

1) A huge variety of courses from the best professors in the world (see the Gamification course from Coursera below), or Machine Learning, Human-Computer Interaction

2) They are free (well, “free” is a slight mistake: your time is not free)!

Also, Signature Track courses at Coursera now offer a verified-certificate track for $39, and they come with more support.

Why do you, as a student, need support? Because sometimes you get stuck, and sometimes you need human interaction to stay motivated.

3) Coursera — I love these things:

- Can run the course faster, at 1.75x speed (because, seriously, I get distracted otherwise)
- Can turn on closed captions (CC) in multiple languages — reading is so much faster
- Best feature: in-video quizzes
- The largest number of courses
- Free!

Codecademy:

- Makes learning fun
- Makes languages easy to learn

I wish someone would mash up more of Coursera’s content with Codecademy’s gamification and teach hacking and data science to the next generation…

View original post 53 more words

## The Resilient Propagation (RProp) algorithm

The RProp (resilient propagation) algorithm is a supervised learning method for training multilayer neural networks, first published in 1993 by Martin Riedmiller and Heinrich Braun. The idea behind it is that the magnitudes of the partial derivatives can have harmful effects on the weight updates. RProp therefore uses an internal adaptive scheme that looks only at the signs of the derivatives and completely ignores their magnitudes. The size of each weight update is determined by a per-weight update value, which is independent of the magnitude of the gradient.

$latex \Delta w_{i,j}^{(t)}=\begin{cases} -\Delta_{i,j}^{(t)} & \text{if } \frac{\partial E^{(t)}}{\partial w_{i,j}} > 0 \\ +\Delta_{i,j}^{(t)} & \text{if } \frac{\partial E^{(t)}}{\partial w_{i,j}} < 0 \\ 0 & \text{otherwise}\end{cases} &s=-2&bg=ffffff&fg=000000$

Here $latex \frac{\partial E^{(t)}}{\partial w_{i,j}}$ is the gradient summed over the whole pattern set, obtained from one pass of batch backpropagation. The second step of the RProp algorithm…
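The sign-only update above can be sketched in a few lines of NumPy. This is a minimal illustration of one RProp step, not the reblogged author’s code; the step-growth/shrink constants (1.2 and 0.5) and the step-size bounds are the values commonly quoted for RProp, and `rprop_update` is a hypothetical helper name.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, delta,
                 eta_plus=1.2, eta_minus=0.5,
                 delta_min=1e-6, delta_max=50.0):
    """One RProp step: adapt per-weight step sizes from gradient signs only."""
    sign_change = grad * prev_grad
    # grow the step where the gradient kept its sign, shrink where it flipped
    delta = np.where(sign_change > 0,
                     np.minimum(delta * eta_plus, delta_max), delta)
    delta = np.where(sign_change < 0,
                     np.maximum(delta * eta_minus, delta_min), delta)
    # where the sign flipped, skip this weight's update for one step
    grad = np.where(sign_change < 0, 0.0, grad)
    # move each weight against the sign of its gradient by its own step size
    w = w - np.sign(grad) * delta
    return w, delta, grad  # returned grad serves as prev_grad next step
```

Note that only `np.sign(grad)` enters the weight update: the gradient’s magnitude affects nothing but the sign comparison, which is exactly the point of the algorithm.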

View original post 289 more words

## Non-Recursive tic-tac-toe AI Algorithm

I intend to lay out the algorithm for a function that iteratively (rather than recursively) returns the next cell to be occupied by an AI in a game of tic-tac-toe. The AI’s difficulty can be divided into three levels: easy, medium, and hard.
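The reblog cuts off before the algorithm itself, but the non-recursive idea can be sketched as a fixed priority scan over the board: take a win, block the opponent’s win, then prefer center, corners, edges. This is a hypothetical illustration of one such rule-based level, not the original author’s implementation.

```python
# All eight winning lines of a 3x3 board, cells indexed 0-8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def next_move(board, player):
    """Return the index (0-8) the AI should occupy, or None if the board is full.
    board is a list of 9 entries: 'X', 'O', or ' '."""
    opponent = 'O' if player == 'X' else 'X'
    # 1) complete our own winning line; 2) otherwise block the opponent's
    for mark in (player, opponent):
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(mark) == 2 and cells.count(' ') == 1:
                return (a, b, c)[cells.index(' ')]
    # 3) prefer center, then corners, then edges
    for i in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[i] == ' ':
            return i
    return None
```

Because the rules are scanned in a flat loop, the function never recurses, which matches the iterative framing of the post.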

View original post 1,200 more words

## properties of heuristics and A*

Let $latex G=(V,E,c)$ be a directed graph with non-negative costs. Also consider a starting vertex $latex s$ and a goal vertex $latex g$. The classic search problem is to find an optimal-cost path between $latex s$ and $latex g$. Consider the distance function $latex d:V \times V \to \mathbb R$ induced by the cost $latex c$ (the cost of the minimum-cost path). Assume also that the graph $latex G$ is strongly connected, and let $latex w$ be a real number greater than or equal to 1.

A heuristic is a function $latex h:V \to \mathbb R$. A heuristic $latex h$ is $latex w$-admissible if $latex h(x) \leq w\,d(x,g)$ for all $latex x \in V$. A heuristic is $latex w$-consistent if $latex h(g) = 0$ and for all $latex x,y \in V$ such that $latex (x,y) \in E$ we have $latex h(x) \leq w\,c(x,y) + h(y)$. For $latex w= 1$ we just say…
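One standard fact worth recording next to these definitions (a routine derivation, not part of the truncated original): every $latex w$-consistent heuristic is $latex w$-admissible. Take any $latex x \in V$ and a minimum-cost path $latex x = v_0, v_1, \dots, v_k = g$. Applying the $latex w$-consistency inequality to each edge and telescoping the sum gives

$latex h(x) \leq w \sum_{i=0}^{k-1} c(v_i, v_{i+1}) + h(g) = w\,d(x,g), &s=-1&bg=ffffff&fg=000000$

since $latex h(g)=0$ and the path has cost $latex d(x,g)$. This is the weighted analogue of the classic result that consistency implies admissibility for A*.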

View original post 1,851 more words

## How Ray Kurzweil Will Help Google Make the Ultimate AI Brain

The road to the technological singularity

Taken from http://www.wired.com/business/2013/04/kurzweil-google-ai?cid=co7464184

Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and a subsequent hiring of the godfather of computer neural nets Geoffrey Hinton, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.

On Tuesday, Kurzweil moderated a live Google hangout tied to a release of the upcoming Will Smith film, *After Earth*, presumably tying…

View original post 1,675 more words