## Perceptron

The idea of the perceptron was introduced in the late 1950s by F. Rosenblatt. Later on, P.J. Werbos expanded on this idea with backpropagation. In layman's terms, a perceptron is a simple algorithm that, given a set of inputs, outputs either a 1 or a 0.

Mathematically, a perceptron can be defined as:

f(**x**) = 1 if **w** · **x** + b > 0, otherwise 0

This equation can be interpreted as a function that takes a set of inputs **x** (these inputs are also known as features) and gives an output of 1 or 0 based on the following condition: if the dot product of the weights and the input features, plus the bias, is greater than 0, the output is 1; otherwise, it is 0.
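This decision rule can be sketched in a few lines of plain Python with NumPy (a minimal illustration; the weights and bias below are hand-picked for the example, not learned):

```python
import numpy as np

def perceptron(x, w, b):
    """Return 1 if the weighted sum of inputs plus bias is positive, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Example: two input features with hand-picked weights and bias
x = np.array([1.0, 0.0])
w = np.array([0.5, 0.5])
b = -0.25
print(perceptron(x, w, b))  # 0.5*1 + 0.5*0 - 0.25 = 0.25 > 0, so the output is 1
```

Changing the inputs so that the weighted sum falls at or below zero flips the output to 0.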

The challenge with the perceptron algorithm is that it can only give a discrete output: 1 or 0. What happens if we want a continuous output? For instance, we can use the perceptron to answer a yes-or-no question, but how do we get a "maybe" out of it? It is not possible – at least for now :).

So how can we implement a perceptron? We will leverage the Keras Sequential model to achieve this. The code fragment below defines a single layer with 12 artificial neurons and expects input variables with 8 dimensions.

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='random_uniform'))
```

Analysis of the code listing above:

**`from keras.models import Sequential`** – There are several models in the Keras package; here we import the simplest one, called Sequential.

**`model = Sequential()`** – A Sequential model is a linear pipeline that takes a stack of neural network layers. This gives us the ability to stack up several layers in order to build a deep neural net. Here we instantiate the Sequential model that we imported earlier.

**`model.add(Dense(12, input_dim=8, kernel_initializer='random_uniform'))`** – `Dense` implements the operation `output = activation(dot(input, kernel) + bias)`, where `activation` is the element-wise activation function passed as the `activation` argument. The `random_uniform` value of the `kernel_initializer` argument specifies that the initial weights are drawn uniformly from small values in the range (-0.05, 0.05).