Deep Learning Practice: Perceptron
In this post we'll look at a hands-on example of classification in Python, using what is called a perceptron.
1. What is a Perceptron?
A perceptron is a binary linear classifier used in supervised learning to determine a line that separates two classes.
Each node in a neural net hidden layer is essentially a small perceptron. As we build this single perceptron, imagine how many of these in sequence could classify data with complex features.
In the following figure we have two types of images, circles and stars, with a line between them called the boundary line.
The example in the figure learns its feature recognition the way deep learning algorithms do, but for this example we'll only have a single neural network layer.
Boundary Line:
The boundary line that separates the two classes is:
$$w_1 x_1 + w_2 x_2 + b = 0$$
Here:
$x_1$ and $x_2$ are the inputs
$w_1$ and $w_2$ are the weights
$b$ is the bias
This equation will allow our model to find the boundary line between our two input classes, star and not star.
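As a quick worked check, using the weights and bias from the script in the next section ($w_1 = 2$, $w_2 = 3$, $b = 0.1$): the point $(x_1, x_2) = (2, 3)$ gives $2 \cdot 2 + 3 \cdot 3 + 0.1 = 13.1 > 0$, so it falls on the star side of the line, while a point with a negative weighted sum would fall on the other side.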
2. Discrete Prediction with Forward Propagation
Now we’ll implement forward propagation to determine whether a point is part of the star class or not.
This is a discrete prediction because our implementation simply returns “yes” or “no” and not a percentage of certainty about that prediction.
Copy the following Python script and paste it into an empty Python file (open a text document and call it Boundary_line.py).
You should have Python installed in your environment (in my case, Windows 10).
import numpy as np

# step activation function
def step(weighted_sum):
    # The step activation is applied to the perceptron output:
    # it returns 1 if the weighted sum is greater than 0, otherwise 0
    return (weighted_sum > 0) * 1

def forward_propagation(input_data, weights, bias):
    # Computes the forward propagation operation of a perceptron and
    # returns the output after applying the step activation function:
    # takes the dot product of the input and the weights and adds the bias
    return step(np.dot(input_data, weights) + bias)

# Initialize parameters
X = np.array([2, 3])            # declaring one data point with two features
Y = np.array([0])               # label
weights = np.array([2.0, 3.0])  # weights of the perceptron
bias = 0.1                      # bias value

# predicted label
Y_predicted = forward_propagation(X, weights.T, bias)

# print out the result
print("Predicted label:", Y_predicted)
Open a Command Prompt in your file path and run the Python script with the following command:
>python path/filename
For me the file name is Boundary_line.py. The result is: Predicted label: 1.
Code Explanation:
forward_propagation function
After the parameters are initialized, the forward_propagation function is called. It takes the input X and the weights, calculates the dot product using np.dot, adds the bias to compute the weighted sum, and then applies the step function to that weighted sum.
step function:
Takes the weighted sum and returns 1 if the value is greater than 0 and 0 otherwise.
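To see both branches of the step function, here is a minimal self-contained sketch; the point (-2, -3) is a hypothetical example added here, not part of the original data:

import numpy as np

def step(weighted_sum):
    # 1 if the weighted sum is greater than 0, otherwise 0
    return (weighted_sum > 0) * 1

def forward_propagation(input_data, weights, bias):
    return step(np.dot(input_data, weights) + bias)

weights = np.array([2.0, 3.0])
bias = 0.1

print(forward_propagation(np.array([2, 3]), weights, bias))    # 1, since 13.1 > 0
print(forward_propagation(np.array([-2, -3]), weights, bias))  # 0, since -12.9 <= 0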
Variables Definition
X – an input NumPy array with feature values 2 and 3
Y – an output label with value 0
weights – the weights of the perceptron, with initial values of 2 and 3, respectively
bias – the bias value, initialized with 0.1
You can find the Python script in my GitHub, under AI_Python/Boundary_line.
3. Logistic Regression
Now we’ll apply the sigmoid activation function to make our example more accurate. The function widens the range of prediction of our program from the discrete values 0 or 1 to a continuous value between 0 and 1.
This allows our program to record various levels of certainty and approve those above a certain threshold.
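As a small illustration (assuming the same sigmoid definition as in the script below), the sigmoid squashes any weighted sum into the open interval (0, 1):

import numpy as np

def sigmoid(x):
    # The sigmoid activation function
    return 1 / (1 + np.exp(-x))

print(sigmoid(-13.1))  # ~0.000002: very certain "not star"
print(sigmoid(0.0))    # 0.5: completely uncertain
print(sigmoid(13.1))   # ~0.999998: very certain "star"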
Copy the following code and paste it into a Python file:
import numpy as np

def sigmoid(x):
    # The sigmoid activation function
    return 1 / (1 + np.exp(-x))

def forward_propagation(input_data, weights, bias):
    # Computes the forward propagation operation of a perceptron and
    # returns the output after applying the sigmoid activation function:
    # takes the dot product of the input and the weights and adds the bias
    return sigmoid(np.dot(input_data, weights) + bias)  # the perceptron equation

# Initializing parameters
X = np.array([2, 3])            # declaring one data point with two features
Y = np.array([0])               # label
weights = np.array([2.0, 3.0])  # weights of the perceptron
bias = 0.1                      # bias value

output = forward_propagation(X, weights.T, bias)  # certainty of the prediction
print("Forward propagation output:", output)

Y_predicted = (output > 0.5) * 1  # apply the 0.5 threshold
print("Label:", Y_predicted)
Open a Command Prompt in your file path and run the Python script with the following command:
>python path/filename
For me the file name is Logistic_Regression.py. As we see, the output is:
Forward propagation output: 0.9999979547735586
Label: 1
Code Explanation:
sigmoid function
For the given input value x, the sigmoid value is calculated as 1 / (1 + np.exp(-x)).
The label after the forward propagation operation is predicted as 1 if the sigmoid output is greater than 0.5 and 0 otherwise. In this example, the threshold is set to 0.5.
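We can verify this output by hand: the weighted sum is $2 \cdot 2 + 3 \cdot 3 + 0.1 = 13.1$, and $\mathrm{sigmoid}(13.1) = 1 / (1 + e^{-13.1}) \approx 0.9999980$, which matches the forward propagation output above and is greater than 0.5, so the predicted label is 1.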
Threshold-based classification is how logistic regression models make their predictions, so we have effectively implemented logistic regression.
4. Error Function: Cross-entropy
Finally, we’ll implement an error function that compares the actual value and the predicted value of each point in our example.
Error functions are used to quantify how wrong a prediction is. For example, instead of simply having a discrete “yes” or “no”, we’ll be able to see how certain the program is in its prediction.
Cross-entropy is the error function used for classification models.
Minimizing the cross-entropy corresponds to maximizing the likelihood that each point belongs to its predicted class.
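For a single point with actual label $y$ and predicted value $\hat{y}$, the binary cross-entropy error computed by calculate_error in the code below is

$$E(y, \hat{y}) = -y \log(\hat{y}) - (1 - y) \log(1 - \hat{y})$$

and the total error is simply the sum of $E$ over all points.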
Copy the following code and paste it into your Python file.
import numpy as np

def sigmoid(x):
    # The sigmoid activation function
    return 1 / (1 + np.exp(-x))

def forward_propagation(input_data, weights, bias):
    # Computes the forward propagation operation of a perceptron and
    # returns the output after applying the sigmoid activation function:
    # takes the dot product of the input and the weights and adds the bias
    return sigmoid(np.dot(input_data, weights) + bias)

def calculate_error(y, y_predicted):
    # Computes the binary cross-entropy error
    return -y * np.log(y_predicted) - (1 - y) * np.log(1 - y_predicted)

def ce_two_different_weights(X, Y, weights_0, weights_1, bias):
    # Computes the sum of error using two different weights and the same bias
    sum_error1 = 0.0
    sum_error2 = 0.0
    for j in range(len(X)):
        Y_predicted_1 = forward_propagation(X[j], weights_0.T, bias)    # predicted label
        sum_error1 = sum_error1 + calculate_error(Y[j], Y_predicted_1)  # sum of error with weights_0
        Y_predicted_2 = forward_propagation(X[j], weights_1.T, bias)    # predicted label
        sum_error2 = sum_error2 + calculate_error(Y[j], Y_predicted_2)  # sum of error with weights_1
    return sum_error1, sum_error2

# Initialize parameters
X = np.array([[2, 3], [1, 4], [-1, -3], [-4, -5]])  # declaring four data points
Y = np.array([1.0, 1.0, 0.0, 0.0])  # actual labels
weights_0 = np.array([0.0, 0.0])    # first set of perceptron weights
weights_1 = np.array([1.0, -1.0])   # second set of perceptron weights
bias = 0.0                          # bias value

sum_error1, sum_error2 = ce_two_different_weights(X, Y, weights_0, weights_1, bias)
print("sum_error1:", sum_error1, "sum_error2:", sum_error2)
Open a Command Prompt in your file path and run the Python script with the following command:
>python path/filename
For me the file name is Cross-entropy.py. As we see, the output is:
PS C:\Utvecklingprogram\AI\AI_Python> python .\Cross-entropy.py
sum_error1: 2.772588722239781 sum_error2: 7.802038737653159
Code Explanation:
ce_two_different_weights function
The function takes as parameters the input data features X, the labels Y, the two weight vectors weights_0 and weights_1, and the bias.
It loops over the training data, calculates the predicted value and the error for each point with both sets of weights, and accumulates the errors of each iteration in sum_error1 (for weights_0) and sum_error2 (for weights_1).
Finally, it returns the sum of cross-entropy error for each of the weights.
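As a sanity check of sum_error1: with weights_0 = (0, 0) and bias 0, every weighted sum is 0, so every prediction is sigmoid(0) = 0.5, and each of the four points contributes $-\ln(0.5) = \ln 2 \approx 0.6931$ of error, giving $4 \ln 2 \approx 2.7726$, exactly the value printed above. Since sum_error1 < sum_error2, the zero weights actually fit this particular data better than weights_1.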
All the source code is in my GitHub.
Conclusion
In this post we have built a simple perceptron classifier, with examples of discrete prediction with forward propagation, logistic regression, and the cross-entropy error function.
In my next post I will go through AI in the Microsoft Azure cloud, with Azure AI Services such as a letter classification system, a face detection system, a digit recognition system, and a music genre classification system.
This post is part of my AI (Artificial Intelligence) step-by-step series.