Math operations in TensorFlow

Now, let's explore some of TensorFlow's math operations in eager execution mode. First, define two constant tensors:

x = tf.constant([1., 2., 3.])
y = tf.constant([3., 2., 1.])

Let's start with some basic arithmetic operations.

Use tf.add to add two tensors element-wise:

sum = tf.add(x, y)
sum.numpy()

array([4., 4., 4.], dtype=float32)

The tf.subtract function computes the element-wise difference between two tensors:

difference = tf.subtract(x, y)
difference.numpy()

array([-2., 0., 2.], dtype=float32)

The tf.multiply function multiplies two tensors element-wise:

product = tf.multiply(x, y)
product.numpy()

array([3., 4., 3.], dtype=float32)

Divide two tensors element-wise using tf.divide:

division = tf.divide(x, y)
division.numpy()

array([0.33333334, 1. , 3. ], dtype=float32)
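All of these elementwise operations follow NumPy-style broadcasting, so a scalar can stand in for a whole tensor. A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([1., 2., 3.])

# Elementwise ops broadcast: the scalar 2. is applied to every element of x.
halved = tf.divide(x, 2.)
halved.numpy()
```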

The dot product can be computed as follows:

dot_product = tf.reduce_sum(tf.multiply(x, y))
dot_product.numpy()

10.0
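The same dot product can also be written with tf.tensordot, which contracts the last axis of its first argument against the first axis of its second; for 1-D tensors this reduces to the ordinary dot product. A quick sketch:

```python
import tensorflow as tf

x = tf.constant([1., 2., 3.])
y = tf.constant([3., 2., 1.])

# axes=1 contracts one axis: for vectors, this is the ordinary dot product.
dot_product = tf.tensordot(x, y, axes=1)
dot_product.numpy()
```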

Next, let's find the index of the minimum and maximum elements:

x = tf.constant([10, 0, 13, 9])

The index of the minimum value is computed using tf.argmin():

tf.argmin(x).numpy()

1

The index of the maximum value is computed using tf.argmax():

tf.argmax(x).numpy()

2
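On higher-rank tensors, both functions also accept an axis argument. A small sketch with a made-up 2 x 2 example:

```python
import tensorflow as tf

x = tf.constant([[10, 0], [13, 9]])

# axis=0 scans down each column; axis=1 scans across each row.
col_argmax = tf.argmax(x, axis=0)
row_argmax = tf.argmax(x, axis=1)
col_argmax.numpy(), row_argmax.numpy()
```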

Run the following code to find the element-wise squared difference between x and y (y is broadcast to match the shape of x):

x = tf.Variable([1,3,5,7,11])
y = tf.Variable([1])

tf.math.squared_difference(x, y).numpy()

[ 0, 4, 16, 36, 100]

Let's try typecasting; that is, converting from one data type into another.

Print the type of x:

print(x.dtype)

tf.int32

We can convert the type of x, which is tf.int32, into tf.float32 using tf.cast, as shown in the following code:

x = tf.cast(x, dtype=tf.float32)

Now, check the x type. It will be tf.float32, as follows:

print(x.dtype)

tf.float32
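One detail worth knowing about tf.cast (easy to verify): converting a float tensor to an integer type truncates toward zero rather than rounding:

```python
import tensorflow as tf

x = tf.constant([1.7, -1.7])

# tf.cast truncates toward zero, so 1.7 -> 1 and -1.7 -> -1.
truncated = tf.cast(x, dtype=tf.int32)
truncated.numpy()
```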

Concatenate the two matrices:

x = [[3,6,9], [7,7,7]]
y = [[4,5,6], [5,5,5]]

Concatenate the matrices row-wise:

tf.concat([x, y], 0).numpy()

array([[3, 6, 9], [7, 7, 7], [4, 5, 6], [5, 5, 5]], dtype=int32)

Use the following code to concatenate the matrices column-wise:

tf.concat([x, y], 1).numpy()

array([[3, 6, 9, 4, 5, 6],
[7, 7, 7, 5, 5, 5]], dtype=int32)

Stack the rows of x along a new axis using the stack function; with axis=1, this transposes the matrix:

tf.stack(x, axis=1).numpy()

array([[3, 7], [6, 7], [9, 7]], dtype=int32)
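The difference between concat and stack is easy to miss: concat joins tensors along an existing axis, while stack creates a new axis. A minimal comparison:

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])

# concat keeps the rank: two (3,) vectors become one (6,) vector.
joined = tf.concat([a, b], 0)

# stack adds an axis: two (3,) vectors become one (2, 3) matrix.
stacked = tf.stack([a, b], axis=0)

joined.numpy(), stacked.numpy()
```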

Now, let's see how to perform the reduce_mean operation:

x = tf.Variable([[1.0, 5.0], [2.0, 3.0]])

x.numpy()

array([[1., 5.], [2., 3.]], dtype=float32)

Compute the mean value of x; that is, (1.0 + 5.0 + 2.0 + 3.0) / 4:

tf.reduce_mean(input_tensor=x).numpy() 

2.75

Compute the mean along axis 0, that is, the per-column means (1.0+2.0)/2 and (5.0+3.0)/2:

tf.reduce_mean(input_tensor=x, axis=0).numpy() 

array([1.5, 4. ], dtype=float32)

Compute the mean along axis 1, that is, the per-row means (1.0+5.0)/2.0 and (2.0+3.0)/2.0:

tf.reduce_mean(input_tensor=x, axis=1, keepdims=True).numpy()

array([[3. ], [2.5]], dtype=float32)
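The keepdims argument only affects the shape: without it, the reduced axis is dropped; with it, the axis stays as size 1, which is handy for broadcasting the mean back against x. A short check:

```python
import tensorflow as tf

x = tf.constant([[1.0, 5.0], [2.0, 3.0]])

# Without keepdims, the reduced axis disappears: shape (2,).
flat = tf.reduce_mean(x, axis=1)

# With keepdims, the reduced axis is kept as size 1: shape (2, 1).
kept = tf.reduce_mean(x, axis=1, keepdims=True)

flat.numpy().shape, kept.numpy().shape
```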

Draw random values from probability distributions:

tf.random.normal(shape=(3,2), mean=10.0, stddev=2.0).numpy()

tf.random.uniform(shape=(3,2), minval=0, maxval=None, dtype=tf.float32).numpy()

Compute the softmax probabilities:

x = tf.constant([7., 2., 5.])

tf.nn.softmax(x).numpy()

array([0.8756006 , 0.00589975, 0.11849965], dtype=float32)
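Since softmax exponentiates the inputs and normalizes by their total, the outputs always form a probability distribution; we can confirm that they sum to (approximately) 1:

```python
import tensorflow as tf

x = tf.constant([7., 2., 5.])
probs = tf.nn.softmax(x)

# The normalization step guarantees the probabilities sum to 1
# (up to float32 rounding).
total = tf.reduce_sum(probs)
total.numpy()
```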

Now, we'll look at how to compute the gradients.

Define the square function:

def square(x):
    return tf.multiply(x, x)

The gradient of the preceding square function can be computed using tf.GradientTape, which records the operations applied to watched tensors and then differentiates through them. The derivative of x squared is 2x, which is 12 at x = 6:

x = tf.constant(6.)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = square(x)
print(tape.gradient(y, x).numpy())

12.0
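The tape also tracks any tf.Variable automatically, with no explicit watch call. A minimal sketch computing the derivative of x * x at x = 3:

```python
import tensorflow as tf

# Trainable variables are watched by the tape automatically.
x = tf.Variable(3.)
with tf.GradientTape() as tape:
    y = tf.multiply(x, x)

# d(x*x)/dx = 2x, which is 6 at x = 3.
grad = tape.gradient(y, x)
grad.numpy()
```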

More TensorFlow operations are available in the Notebook on GitHub at http://bit.ly/2YSYbYu.

TensorFlow offers a lot more than this. We will learn about its various important functionalities as we progress through this book.
