# Part 3 – The Single-layer Perceptron

We propose to implement a simple ANN from scratch using only the NumPy library. Although it is more efficient to use deep learning libraries such as TensorFlow or PyTorch, the motivation is to gain a better understanding of how ANNs work.

We will look at implementing an ANN with 3 input neurons, meaning that our problem has 3 features. Here is the architecture: $w_i$ is the weight between the $i$-th neuron in the input layer and the output neuron.

## Feedforward

The activation of the predicted output can be written as follows,

$$\hat{y} = \sigma(z) = \frac{1}{1 + e^{-z}} \tag{1}$$

where $z$ is the weighted sum of the inputs,

$$z = w_1 x_1 + w_2 x_2 + w_3 x_3 + b \tag{2}$$

## Backpropagation
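As a quick sketch, equations (1) and (2) map directly to NumPy. The input and weight values below are illustrative assumptions, not values from the text:

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values (assumptions, not from the text)
x = np.array([0.0, 1.0, 0.0])    # 3 input features
w = np.array([0.5, -0.2, 0.1])   # 3 weights
b = 0.0                          # bias

z = np.dot(x, w) + b             # equation (2): weighted sum of inputs
y_hat = sigmoid(z)               # equation (1): predicted output in (0, 1)
```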

The MSE cost function is defined as follows,

$$C = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2 \tag{3}$$

Let's apply gradient descent to the first weight $w_1$, with learning rate $\eta$,

$$w_1 \leftarrow w_1 - \eta \frac{\partial C}{\partial w_1} \tag{4}$$

The chain rule now becomes,

$$\frac{\partial C}{\partial w_1} = \frac{\partial C}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z} \cdot \frac{\partial z}{\partial w_1} \tag{5}$$

Isolating each term, we have

$$\frac{\partial C}{\partial \hat{y}} = \frac{2}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right), \qquad \frac{\partial \hat{y}}{\partial z} = \sigma(z) \left( 1 - \sigma(z) \right), \qquad \frac{\partial z}{\partial w_1} = x_1$$

If we repeat the same steps for the other 2 weights and the bias, we get

$$\frac{\partial z}{\partial w_2} = x_2, \qquad \frac{\partial z}{\partial w_3} = x_3, \qquad \frac{\partial z}{\partial b} = 1$$

We are now ready to implement!
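The chain-rule terms in equation (5) also translate directly to NumPy. This sketch applies one gradient descent update on a single example; all numeric values are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One illustrative training example (assumed values)
x = np.array([1.0, 0.0, 1.0])    # features
y = 1.0                          # target
w = np.array([0.1, 0.2, -0.1])   # initial weights
b = 0.0                          # initial bias
lr = 0.5                         # learning rate (eta), assumed

# Feedforward, equations (1) and (2)
z = np.dot(x, w) + b
y_hat = sigmoid(z)

# Chain-rule terms from equation (5), with n = 1
dC_dyhat = 2 * (y_hat - y)       # derivative of the MSE cost
dyhat_dz = y_hat * (1 - y_hat)   # derivative of the sigmoid
dz_dw = x                        # dz/dw_i = x_i
delta = dC_dyhat * dyhat_dz

# Gradient descent update, equation (4), for all weights and the bias
w = w - lr * delta * dz_dw
b = b - lr * delta * 1.0         # dz/db = 1
```

In the full implementation, the same update is vectorized over the whole batch of examples instead of a single one.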

## NumPy implementation

### The problem

Before tackling the implementation itself, we need to define a problem to solve. Let’s build a toy dataset for a simple classification problem. Suppose we have some information about obesity, smoking habits, and exercise habits of five people. We also know whether these people are diabetic or not. We can encode this information as follows:

“In the above table, we have five columns: Person, Smoking, Obesity, Exercise, and Diabetic. Here 1 refers to true and 0 refers to false. For instance, the first person has values of 0, 1, 0 which means that the person doesn’t smoke, is obese, and doesn’t exercise. The person is also diabetic.

It is clearly evident from the dataset that a person’s obesity is indicative of him being diabetic. Our task is to create a neural network that is able to predict whether an unknown person is diabetic or not given data about his exercise habits, obesity, and smoking habits. This is a type of supervised learning problem where we are given inputs and corresponding correct outputs and our task is to find the mapping between the inputs and the outputs.”

from 
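For reference, the dataset described above can be encoded as NumPy arrays. Only the first row is fixed by the quoted description; the remaining rows are assumptions chosen so that the diabetic label follows the obesity column, as the text states:

```python
import numpy as np

# Columns: Smoking, Obesity, Exercise (1 = true, 0 = false).
# Row 1 matches the quoted description; the other rows are assumed.
X = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]])

# Diabetic label for each of the five people (one column vector)
y = np.array([[1, 0, 0, 1, 1]]).T
```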

### The code

We will base our implementation on the neural network architecture described above.

We start by importing some libraries and defining the sigmoid function and its derivative. We then define our dataset and the hyperparameters of the model. During the training phase, we perform the feedforward and backpropagation steps. We can then plot the evolution of the cost, weights, and bias over the number of iterations (epochs). Finally, we test our neural network on some unseen examples.
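These steps can be sketched end to end as follows. The hyperparameter values, the random seed, and every dataset row except the first are assumptions (the rows are chosen so that the diabetic label follows the obesity column, as the text describes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1 - s)

# Toy dataset (row 1 from the text; the rest assumed): Smoking, Obesity, Exercise
X = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]])
y = np.array([[1, 0, 0, 1, 1]]).T   # Diabetic label

# Hyperparameters (assumed values)
np.random.seed(42)
w = np.random.rand(3, 1)   # 3 weights, one per feature
b = np.random.rand(1)      # bias
lr = 0.5                   # learning rate (eta)
epochs = 20000

costs = []
for epoch in range(epochs):
    # Feedforward, equations (1) and (2)
    z = X.dot(w) + b
    y_hat = sigmoid(z)

    # MSE cost, equation (3), tracked for plotting
    error = y_hat - y
    costs.append(np.mean(error ** 2))

    # Backpropagation via the chain rule, equation (5)
    delta = 2 * error * sigmoid_derivative(z)   # dC/dz for each example
    # Gradient descent update, equation (4), averaged over the batch
    w -= lr * X.T.dot(delta) / len(X)
    b -= lr * delta.mean()

# Unseen examples: [smoking, obesity, exercise]
example_1 = sigmoid(np.array([1, 0, 0]).dot(w) + b).item()  # smokes, not obese, no exercise
example_2 = sigmoid(np.array([0, 1, 0]).dot(w) + b).item()  # doesn't smoke, obese, no exercise
print(example_1, example_2)
```

The `costs` list collects the MSE at every epoch, so its evolution (and that of the weights and bias, if tracked the same way) can be plotted afterwards, e.g. with matplotlib.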

In example 1, a person who smokes, is not obese, and does not exercise is classified as not diabetic. In example 2, a person who does not smoke, is obese, and does not exercise is classified as diabetic.

The training error (MSE) keeps decreasing with the number of iterations, which is a good sign. We can also notice that the weight $w_2$ becomes predominant after many iterations. This is because the 2nd feature (obesity) is strongly correlated with the output variable (diabetic).