Classification

A classification problem trains a model, such as a neural network, to assign each input to one of a fixed set of categories. For example, it can classify images of clothing into trousers, tops, and shirts. Once trained, the model predicts the class of any new input we provide.

A simple example would be filtering an email as spam or not spam. Classification predicts categorical class labels for new data based on a training set of labeled examples. There are many classification models, such as Naive Bayes, random forests, decision trees, and logistic regression.
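To make the spam example concrete, here is a minimal sketch that trains one of these classical models, logistic regression, on a tiny made-up dataset; the scikit-learn dependency, the two features (count of the word "free" and number of exclamation marks), and the values themselves are assumptions used purely for illustration:

from sklearn.linear_model import LogisticRegression

# Made-up features per email: [count of the word "free", number of exclamation marks]
X_train = [[3, 5], [4, 2], [0, 0], [1, 1], [5, 7], [0, 2]]
y_train = [1, 1, 0, 0, 1, 0]             # 1 = spam, 0 = not spam

clf = LogisticRegression()
clf.fit(X_train, y_train)                # learn a decision boundary from the labeled examples
print(clf.predict([[2, 4], [0, 1]]))     # predict labels for two new emails

The same workflow, fitting on labeled examples and then predicting labels for new inputs, applies to the neural network we build next.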

Here, we will work on a simple classification problem. To do this, use the following code:

%matplotlib inline

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

# torch.manual_seed(1) # reproducible

# make fake data
n_data = torch.ones(100, 2)
x0 = torch.normal(2*n_data, 1)       # class0 x data (tensor), shape=(100, 2)
y0 = torch.zeros(100)                # class0 y data (tensor), shape=(100,)
x1 = torch.normal(-2*n_data, 1)      # class1 x data (tensor), shape=(100, 2)
y1 = torch.ones(100)                 # class1 y data (tensor), shape=(100,)
x = torch.cat((x0, x1), 0).type(torch.FloatTensor)  # shape (200, 2), FloatTensor = 32-bit floating point
y = torch.cat((y0, y1), 0).type(torch.LongTensor)   # shape (200,), LongTensor = 64-bit integer

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
        self.out = torch.nn.Linear(n_hidden, n_output)       # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))   # activation function for hidden layer
        x = self.out(x)              # raw class scores (logits)
        return x

net = Net(n_feature=2, n_hidden=10, n_output=2) # define the network
print(net) # net architecture

optimizer = torch.optim.SGD(net.parameters(), lr=0.02)
loss_func = torch.nn.CrossEntropyLoss()  # the target labels are class indices, NOT one-hot encoded

plt.ion()  # turn on interactive mode for live plotting

for t in range(100):
    out = net(x)                 # input x and predict based on x
    loss = loss_func(out, y)     # must be (1. nn output, 2. target); the target labels are NOT one-hot encoded

    optimizer.zero_grad()        # clear gradients for the next training step
    loss.backward()              # backpropagation, compute gradients
    optimizer.step()             # apply gradients

    if t % 10 == 0:

Now, every 10 steps, let's plot the predictions and display the learning process (this code continues inside the if block above):

        # plot and show the learning process
        plt.cla()
        prediction = torch.max(out, 1)[1]    # index of the largest logit = predicted class
        pred_y = prediction.data.numpy()
        target_y = y.data.numpy()
        plt.scatter(x.data.numpy()[:, 0], x.data.numpy()[:, 1], c=pred_y, s=100, lw=0, cmap='RdYlGn')
        accuracy = float((pred_y == target_y).astype(int).sum()) / float(target_y.size)
        plt.text(1.5, -4, 'Accuracy=%.2f' % accuracy, fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)

plt.ioff()
plt.show()

The output of the preceding code is as follows: 

Net(
  (hidden): Linear(in_features=2, out_features=10, bias=True)
  (out): Linear(in_features=10, out_features=2, bias=True)
)

We will show only a few of the plots from the output here, as in the following screenshot:

You can see that the accuracy increases as the number of training steps grows:

We can reach an accuracy level of 1.00 in the final step of our execution.
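Once training has finished, the same network can be used to classify points it has never seen. The following is a minimal sketch, assuming the net and the training loop above have already been run; the two sample points are made up for illustration:

new_points = torch.FloatTensor([[2.0, 2.0], [-2.0, -2.0]])  # two made-up points, one near each cluster
logits = net(new_points)              # raw class scores from the trained network
probs = F.softmax(logits, dim=1)      # convert the scores into class probabilities
predicted = torch.max(probs, 1)[1]    # index of the most probable class for each point
print(predicted)                      # should print class 0 for the first point and class 1 for the second

Because class 0 was generated around (2, 2) and class 1 around (-2, -2), a fully trained network should assign these points to those classes.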
