Defining the loss function

The mean squared error (MSE) of regression is given as follows:

$$J = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2 \tag{3}$$

Here, $N$ is the number of training samples, $y_i$ is the actual value, and $\hat{y}_i$ is the predicted value.

The implementation of the preceding loss function is shown here. We feed the data and the model parameter, theta, to the loss function, which returns the MSE. Remember that data[:, 0] holds the $x$ values and data[:, 1] holds the $y$ values. Similarly, theta[0] holds the value of $m$ and theta[1] holds the value of $b$.

Let's define the loss function:

def loss_function(data, theta):

Now, we need to get the values of $m$ and $b$:

    m = theta[0]
    b = theta[1]

Next, we initialize the loss to zero:

    loss = 0

Then, we iterate over each training sample:

    for i in range(0, len(data)):

Now, we get the values of $x$ and $y$ for the current sample:

        x = data[i, 0]
        y = data[i, 1]

Then, we predict the value of $\hat{y}$:

        y_hat = m * x + b

Here, we accumulate the squared error, as given in equation (3):

        loss = loss + (y - y_hat) ** 2

Then, we compute the mean squared error:

    mse = loss / float(len(data))

    return mse
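Before calling the function, data and theta must be defined. Their exact initialization appears earlier in the chapter; the following is only a minimal sketch, assuming 500 samples drawn from a standard normal distribution and a randomly initialized theta:

import numpy as np

# Hypothetical initialization, for illustration only
data = np.random.randn(500, 2)   # column 0 holds x, column 1 holds y
theta = np.random.randn(2)       # theta[0] is m, theta[1] is b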

When we feed our randomly initialized data and model parameter, theta, to loss_function, it returns the mean squared error, as follows:

loss_function(data, theta)

1.0253548008165727
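As a side note, the same loss can be computed in vectorized form with NumPy, which avoids the explicit Python loop. This is only a sketch, assuming data is a NumPy array of shape (N, 2); it is not the implementation used in the rest of this chapter:

import numpy as np

def loss_function_vectorized(data, theta):
    # Unpack the slope m and the intercept b
    m, b = theta[0], theta[1]

    # Column 0 holds x and column 1 holds y
    x, y = data[:, 0], data[:, 1]

    # Predict y_hat for all samples at once and average the squared errors
    y_hat = m * x + b
    return np.mean((y - y_hat) ** 2)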

Now, we need to minimize this loss. In order to minimize the loss, we need to calculate the gradient of the loss function, $J$, with respect to the model parameters, $m$ and $b$, and update the parameters according to the parameter update rule. First, we will calculate the gradients of the loss function.
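For reference, differentiating equation (3) with respect to $m$ and $b$, with $\hat{y}_i = m x_i + b$, gives the following gradients:

$$\frac{\partial J}{\partial m} = \frac{2}{N} \sum_{i=1}^{N} -x_i \left( y_i - \hat{y}_i \right)$$

$$\frac{\partial J}{\partial b} = \frac{2}{N} \sum_{i=1}^{N} -\left( y_i - \hat{y}_i \right)$$

The parameter update rule of gradient descent then becomes $m \leftarrow m - \alpha \frac{\partial J}{\partial m}$ and $b \leftarrow b - \alpha \frac{\partial J}{\partial b}$, where $\alpha$ is the learning rate.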
