Posts

A Single Layer Perceptron for Regression: Part 5

New function created: calculate_rsquare: This function calculates the R-square value, which helps gauge the accuracy of the model. It is used for both single and multiple linear regression; however, adjusted R-square works better for the latter.
Parameters:
(1) predicted_output (numpy array (m,1)): Predicted output values.
(2) actual_output (numpy array (m,1)): The original output values.
(3) num_of_predictor_variables (int): Number of predictor variables.
(4) mean_of_output (float or string): The mean of the actual output values. If it is "default", then the mean has to be calculated; otherwise it is passed.
(5) adjusted_r2 (Boolean): False by default. It can be made True if the Adjusted R-square is required.
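Based on the parameters above, the function could look something like this minimal sketch (the body is my reconstruction from the post's description, not the repository's actual code):

```python
import numpy as np

def calculate_rsquare(predicted_output, actual_output,
                      num_of_predictor_variables,
                      mean_of_output="default", adjusted_r2=False):
    """R-square (optionally adjusted) for regression predictions."""
    if mean_of_output == "default":
        mean_of_output = np.mean(actual_output)  # mean has to be calculated
    ss_res = np.sum((actual_output - predicted_output) ** 2)  # residual sum of squares
    ss_tot = np.sum((actual_output - mean_of_output) ** 2)    # total sum of squares
    r2 = 1 - ss_res / ss_tot
    if adjusted_r2:
        m = actual_output.shape[0]            # number of samples
        n = num_of_predictor_variables
        return 1 - (1 - r2) * (m - 1) / (m - n - 1)
    return r2
```

A perfect prediction gives R-square = 1, and predicting the mean everywhere gives 0, which is the sanity check the metric is usually tested with.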

A Single Layer Perceptron for Regression: Part 4

Testing - Observations, Rectifications, and Plans
1) Adding a second condition to end epochs: Until now, the only condition in the loop was that the difference between the errors of two consecutive epochs had to be within the threshold value. This created an issue when the error value kept going up with each epoch. I therefore added another condition that checks whether the current error is more than 0.2 higher than the previous error; if so, the loop breaks. Reason for using 0.2: while experimenting with the dataset, 0.2 gave the best results. I hope to:
     a) Change 0.2 to an argument.
     b) Experiment with more datasets to find a more appropriate value to use.
2) Need to add an accuracy metric: Currently, the code only shows the error. However, I would like to add the R-squared value as a metric to see how good the model is with respect to accuracy.
Code
The new version of the code can be found at this link: https://github.com/HridayaAnnuncio247/Single-Layer-Perceptron
Dataset Used For Testing
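The two stopping conditions above could be sketched like this (a simplified loop, not the repository's code; following plan (a), the 0.2 value is already made an argument here, and `update_fn` is a hypothetical callable that runs one epoch and returns its error):

```python
def train(update_fn, initial_error, threshold=1e-4,
          divergence_margin=0.2, max_epochs=1000):
    """Run epochs until the error converges or starts diverging.

    update_fn() performs one epoch and returns the new error.
    divergence_margin is the post's 0.2, exposed as an argument.
    """
    error = prev_error = initial_error
    for epoch in range(max_epochs):
        error = update_fn()
        if abs(prev_error - error) < threshold:     # condition 1: converged
            break
        if error > prev_error + divergence_margin:  # condition 2: error rising too fast
            break
        prev_error = error
    return error
```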

A Single Layer Perceptron for Regression: Part 3

New functions created:
normalize: This function normalizes inputs using either min-max or Z-score normalization techniques.
Importance of normalization: it prevents some attributes from dominating the model over others just because their range of values is greater in comparison.
Parameters:
(1) input (numpy array (m,n)): Input/predictor values that have to be normalized.
(2) outputs (numpy array (m,1)): Output values that have to be normalized.
(3) type_of_normlaization (string): Can be min_max or z_score.
denormalize: This function converts the normalized values back to their original form.
Parameters:
(1) outputs (numpy array (m,1)): Predicted outputs that are to be brought back to their original form.
(2) term1 (float): Either the min value or the mean of the original outputs.
(3) term2 (float): Either the max value or the standard deviation of the original outputs.
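The pair of functions described above could look roughly like this (my own sketch of the described behavior, with returns for term1/term2 assumed so denormalize can be called later; the actual repository code may differ):

```python
import numpy as np

def normalize(inputs, outputs, type_of_normalization="min_max"):
    """Normalize inputs column-wise and outputs, returning the terms
    (min/max or mean/std of the outputs) needed to denormalize later."""
    if type_of_normalization == "min_max":
        term1, term2 = outputs.min(), outputs.max()
        inputs_norm = (inputs - inputs.min(axis=0)) / (inputs.max(axis=0) - inputs.min(axis=0))
        outputs_norm = (outputs - term1) / (term2 - term1)
    else:  # "z_score"
        term1, term2 = outputs.mean(), outputs.std()
        inputs_norm = (inputs - inputs.mean(axis=0)) / inputs.std(axis=0)
        outputs_norm = (outputs - term1) / term2
    return inputs_norm, outputs_norm, term1, term2

def denormalize(outputs, term1, term2, type_of_normalization="min_max"):
    """Convert normalized predictions back to their original scale."""
    if type_of_normalization == "min_max":
        return outputs * (term2 - term1) + term1  # term1 = min, term2 = max
    return outputs * term2 + term1                # term1 = mean, term2 = std
```

Round-tripping through normalize and then denormalize should return the original output values, which is a quick way to test both functions together.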

A Single Layer Perceptron for Regression: Part 2

Functions edited:
initialize_weights: Earlier, this function returned a numpy array of shape (1,n). It has now been edited to return the transpose, i.e., a numpy array of shape (n,1). This change was made to make matrix multiplication in the function forward_pass easy.
forward_pass: The main change is that this function now uses matrix multiplication to multiply all inputs with the weights. This will enable batch processing.
activate: This function now activates the input sums for every row of predictor values.
New functions created:
backward_pass: Finds the change in weights required with respect to the error. Based on the derivation, for a neuron on the output layer, delta_w = learning_rate * eeta * input_attribute_ij, wherein eeta = predicted_output_i * (1 - predicted_output_i) * (actual_output_i - predicted_output_i).
Parameters:
(1) input (numpy array (m,n))
(2) predicted_output (numpy array (m,1))
(3) actual_output (numpy array (m,1))
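The delta-w formula above, vectorized over a whole batch, could be sketched as follows (my assumption: updates are summed over the batch, and weights and learning_rate are passed in as extra parameters; the repository's signature may differ):

```python
import numpy as np

def backward_pass(inputs, predicted_output, actual_output,
                  weights, learning_rate=0.1):
    """Batch weight update for a sigmoid output neuron.

    eeta_i = y_hat_i * (1 - y_hat_i) * (y_i - y_hat_i) per row, and
    delta_w_j = learning_rate * sum_i(eeta_i * input_ij).
    """
    eeta = predicted_output * (1 - predicted_output) * (actual_output - predicted_output)  # (m,1)
    delta_w = learning_rate * inputs.T @ eeta  # (n,1): one matrix multiply sums over the batch
    return weights + delta_w
```

The `inputs.T @ eeta` product is why the (n,1) weight shape from initialize_weights matters: it lets one matrix multiplication accumulate delta_w over all m rows at once.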

A Single Layer Perceptron for Regression: Part 1

The next agenda: to create the code for a single layer perceptron.
What is a single layer perceptron? A neural network with one layer that has a single neuron. This means:
1) Input/predictor values enter this neuron.
2) Each input has some weight connected to it.
3) Inputs are multiplied by their respective weights and summed up.
4) This sum is passed through an activation function.
5) The activated value of the sum/net is the predicted/output value of the neuron.
6) The error is calculated and the weights are changed with the help of a backward pass.
(Image created by Hridaya Annuncio on Kleki.com)
Functions created till now:
initialize_weights: The initial random weights assigned.
Parameters:
(1) row: A single row of predictor values. This is used to find the number of weights that have to be initialized.
activate: Based on the activation function desired, the activated net value of all the inputs times their weights is calculated.
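Steps 1-5 above could be sketched like this (a minimal version using a sigmoid activation; the actual repository code and its choice of activation may differ):

```python
import numpy as np

def initialize_weights(row):
    """One random initial weight per predictor value in the row."""
    return np.random.rand(len(row), 1)

def sigmoid(net):
    """A common choice of activation function for the net value."""
    return 1 / (1 + np.exp(-net))

def forward_pass(inputs, weights):
    """Steps 3-5: multiply inputs by weights, sum, then activate."""
    net = inputs @ weights  # weighted sum for every row at once
    return sigmoid(net)     # activated value = predicted output
```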

What Data Structure Do I use?

Hello Everyone! Creating a neural network from scratch has been on my mind for the last few months. So, I have finally converted my thoughts to action! I am going to try to create neural networks from scratch, i.e., without using libraries like TensorFlow and PyTorch. While creating the basic roadmap, I came across the following:
1) What kind of data structure should I use for each neuron? After a bit of looking around, the answer is numpy arrays.
Why numpy arrays over lists?
- Each item of a list is a pointer to a memory location that actually contains the value. (This is why lists can store items of different data types.)
- Each item in a numpy array is stored in contiguous memory locations (numpy's core is written in C).
- This means that each time a list value is retrieved there are two visits to memory locations, but when a numpy array value is retrieved, there is only one memory location to go to.
I hope to understand more about numpy arrays - the way they use memory.
2) How to u
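The contiguity point above can be checked directly from numpy's own metadata, for example:

```python
import numpy as np

lst = [1, 2.5, "three"]          # a list may mix types: each slot holds a pointer
arr = np.array([1.0, 2.0, 3.0])  # an array stores raw values of one fixed type

print(arr.dtype)                      # float64: one type shared by every item
print(arr.flags['C_CONTIGUOUS'])      # True: items sit in one contiguous block
print(arr.nbytes == arr.itemsize * arr.size)  # 3 items * 8 bytes = 24 bytes total
```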