Using variables in PyTorch

Variables in PyTorch are used to build a computational graph. Every operation performed on a variable adds a node to this graph, which connects all of the calculation steps; when the error is propagated backwards, the gradients of all the variables are computed in a single pass. A plain tensor, by contrast, does not record these operations. We will look into this difference with a simple example:

import torch
from torch.autograd import Variable

# A Variable in torch wraps a tensor and records operations on it in a computational graph.
# Unlike TensorFlow, torch has no placeholders: the variable itself is passed through the graph.

tensor = torch.FloatTensor([[1,2,3],[4,5,6]]) # build a tensor
variable = Variable(tensor, requires_grad=True) # build a variable, usually used for computing gradients

print(tensor) # [torch.FloatTensor of size 2x3]
print(variable) # [torch.FloatTensor of size 2x3]

# So far the tensor and the variable look similar.
# However, the variable is part of the computational graph and participates in automatic gradient computation.

# Compute the mean of the squared tensor: mean(x^2)
t_out = torch.mean(tensor*tensor)

# Compute the same mean of the squared variable: mean(x^2)
v_out = torch.mean(variable*variable)

Now, let's print both results:


# print the results
print(t_out)
print(v_out)
# both print 15.1667, the mean of the six squared elements (91/6)

v_out.backward() # backpropagation from v_out
# v_out = 1/6 * sum(variable*variable), so
# d(v_out)/d(variable) = 2 * variable / 6 = variable / 3

#Let's print the variable gradient

print(variable.grad)
'''
0.3333 0.6667 1.0000
1.3333 1.6667 2.0000
'''
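To sanity-check these gradients by hand, we can compare the analytic formula `variable / 3` against a numerical finite-difference estimate. The sketch below uses plain Python (no torch); `mean_of_squares` and `numerical_grad` are illustrative helpers written for this check, not part of any library.

```python
# Finite-difference check of d(mean(x^2))/dx = 2*x/n, here with n = 6 elements.
def mean_of_squares(xs):
    return sum(v * v for v in xs) / len(xs)

def numerical_grad(f, xs, i, eps=1e-6):
    # Central difference along coordinate i.
    lo = list(xs); hi = list(xs)
    lo[i] -= eps; hi[i] += eps
    return (f(hi) - f(lo)) / (2 * eps)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]       # the flattened 2x3 tensor
analytic = [2 * v / len(xs) for v in xs]   # = x/3 for n = 6
numeric = [numerical_grad(mean_of_squares, xs, i) for i in range(len(xs))]
for a, n in zip(analytic, numeric):
    assert abs(a - n) < 1e-4               # the two estimates agree
print(analytic)  # [0.333..., 0.666..., 1.0, 1.333..., 1.666..., 2.0]
```

The numerical estimates match the values printed by `variable.grad` above.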

print("Data in the variable: "+str(variable)) # this is the data in variable format

"""
Data in the variable: tensor([[1., 2., 3.],
        [4., 5., 6.]], requires_grad=True)
"""

print(variable.data) # this is the data in tensor format
"""
tensor([[1., 2., 3.],
        [4., 5., 6.]])
"""

# print the result in numpy format
print(variable.data.numpy())
"""
[[1. 2. 3.]
 [4. 5. 6.]]
"""

Here is the output of the preceding code block:

tensor([[1., 2., 3.],
        [4., 5., 6.]])
tensor([[1., 2., 3.],
        [4., 5., 6.]], requires_grad=True)
tensor(15.1667)
tensor(15.1667, grad_fn=<MeanBackward1>)
tensor([[0.3333, 0.6667, 1.0000],
        [1.3333, 1.6667, 2.0000]])
Data in the variable: tensor([[1., 2., 3.],
        [4., 5., 6.]], requires_grad=True)
tensor([[1., 2., 3.],
        [4., 5., 6.]])
[[1. 2. 3.]
 [4. 5. 6.]]
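Note that since PyTorch 0.4, `Variable` has been merged into `Tensor` and is deprecated: a tensor created with `requires_grad=True` plays exactly the same role, which is also why the output above uses the plain `tensor(...)` format. As a minimal sketch, the equivalent modern code (assuming PyTorch >= 0.4) is:

```python
import torch

# requires_grad=True replaces the old Variable wrapper
x = torch.tensor([[1., 2., 3.], [4., 5., 6.]], requires_grad=True)
out = torch.mean(x * x)    # builds the computational graph, exactly as before
out.backward()             # backpropagate from the scalar output

print(out.item())          # 15.1667 (= 91/6)
print(x.grad)              # the same gradients as variable.grad above (x / 3)
print(x.detach().numpy())  # .detach() is the preferred replacement for .data
```

`.detach()` returns a tensor that shares storage with `x` but is cut off from the graph, which is safer than `.data` because subsequent in-place changes are visible to autograd's error checking.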

Now, let's try plotting data on a graph using matplotlib.
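As a minimal sketch of such a plot (the choice of plotting each element against its square, and all labels, are my own illustration, not taken from this book's hidden section):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6.])  # the flattened 2x3 tensor values
y = x ** 2                              # the squared values that were averaged above

plt.plot(x, y, "ro-", label="x^2")
plt.xlabel("x")
plt.ylabel("x^2")
plt.legend()
plt.savefig("squares.png")
```

Running this saves a simple curve of the squared values to `squares.png`.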
