Perceptron

The idea of the perceptron was introduced in the late 1950s by F. Rosenblatt. Later, P. J. Werbos expanded on this idea with backpropagation. In layman's terms, a perceptron is a simple algorithm that, given a set of inputs, outputs either a 1 or a 0.

Mathematically, a perceptron can be defined as:

f(x) = 1 if w · x + b > 0, otherwise 0

This equation can be interpreted as a function that takes a set of inputs x (also known as features) and gives an output of 1 or 0 based on the following condition: if the weighted sum of the input features plus the bias b is greater than 0, the output is 1; otherwise, it is 0.
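To make the rule concrete, here is a minimal sketch of that decision function in Python. The weights and bias below are arbitrary example values chosen for illustration, not learned parameters:

```python
import numpy as np

def perceptron(x, w, b):
    """Return 1 if the weighted sum of inputs plus the bias exceeds 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

w = np.array([0.5, -0.6, 0.2])  # example weights, one per input feature
b = 0.1                         # example bias

print(perceptron(np.array([1.0, 0.0, 1.0]), w, b))  # 0.5 + 0.2 + 0.1 = 0.8 > 0, so 1
print(perceptron(np.array([0.0, 1.0, 0.0]), w, b))  # -0.6 + 0.1 = -0.5 <= 0, so 0
```

Notice that no matter what the inputs are, the output is always exactly 1 or 0, which leads to the limitation discussed next.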

The challenge with the perceptron algorithm is that it can only produce a discrete output: 1 or 0. What happens if we want a continuous output? For instance, we can use a perceptron to answer a yes-or-no question, but how do we get a "maybe" out of it? It is not possible – at least for now :).

So how can we implement a perceptron? We will leverage the Keras Sequential model to achieve this. The code fragment below defines a single layer with 12 artificial neurons, where each input sample has 8 features.

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# Single fully connected layer: 12 neurons, each receiving 8 input features
model.add(Dense(12, input_dim=8))
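For intuition, a Dense layer like the one above computes the same weighted-sum operation as the perceptron equation, just with 12 outputs instead of one. A minimal NumPy sketch of that forward pass (the weights here are random stand-ins, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 12))  # one weight per (input feature, neuron) pair
b = np.zeros(12)              # one bias per neuron
x = rng.normal(size=(1, 8))   # a single sample with 8 features

z = x @ W + b                 # weighted sum per neuron, just like w . x + b
print(z.shape)                # (1, 12): one output per neuron
```

Training the Keras model would then adjust W and b so that these outputs become useful predictions.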