Summary

We started this chapter by learning about TensorFlow and how it uses computational graphs. We learned that every computation in TensorFlow is represented by a computational graph consisting of nodes and edges, where nodes are mathematical operations, such as addition and multiplication, and edges are tensors.
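As a quick recap, a minimal sketch of building and running such a graph, assuming the TensorFlow 1.x graph API (exposed as `tf.compat.v1` in TensorFlow 2):

```python
import tensorflow as tf

# Use the TensorFlow 1.x-style graph API (an assumption: in TensorFlow 2
# this lives under tf.compat.v1 and requires disabling eager execution).
tf1 = tf.compat.v1
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    # Nodes are operations; the edges between them carry tensors.
    a = tf1.constant(2, name='a')
    b = tf1.constant(3, name='b')
    c = tf1.add(a, b, name='add')       # addition node
    d = tf1.multiply(c, a, name='mul')  # multiplication node

# Nothing has been computed yet; a session executes the graph.
with tf1.Session(graph=graph) as sess:
    result = sess.run(d)
    print(result)  # (2 + 3) * 2 = 10
```

Note that defining the graph and executing it are two separate steps, which is exactly the distinction eager execution later removes.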

We learned that variables are containers used to store values, and that they serve as input to several other operations in a computational graph. Later, we learned that placeholders are like variables for which we define only the type and dimensions without assigning values; the values for placeholders are fed at runtime.
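A small sketch of the difference, again assuming the TensorFlow 1.x API via `tf.compat.v1`:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# A placeholder fixes only the type and shape; its value is fed at runtime.
x = tf1.placeholder(tf.float32, shape=[None], name='x')

# A variable is a container that stores a value across runs.
w = tf1.Variable([2.0], name='w')

y = x * w

with tf1.Session() as sess:
    # Variables must be explicitly initialized before use.
    sess.run(tf1.global_variables_initializer())
    # The placeholder's value is supplied through feed_dict.
    result = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})
```

Here `feed_dict` is what "feeds values at runtime" means in practice: the same graph can be run again with different inputs.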

Next, we learned about TensorBoard, TensorFlow's visualization tool, which can be used to visualize a computational graph. It can also plot various quantitative metrics and the results of intermediate calculations.
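A minimal sketch of exporting a graph for TensorBoard, assuming the TensorFlow 1.x `FileWriter` API (the log directory here is a temporary one chosen for illustration):

```python
import os
import tempfile

import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    a = tf1.constant(2, name='a')
    b = tf1.constant(3, name='b')
    c = tf1.add(a, b, name='add')

# FileWriter serializes the graph into an event file TensorBoard can read.
logdir = tempfile.mkdtemp()
writer = tf1.summary.FileWriter(logdir, graph)
writer.close()

# Inspect the graph by launching: tensorboard --logdir=<logdir>
```

The `name` arguments given to each operation are what appear as node labels in the TensorBoard graph view.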

We also learned about eager execution, which is more Pythonic and allows for rapid prototyping. Unlike graph mode, where we must construct a graph before performing any operation, eager execution follows the imperative programming paradigm: operations are executed immediately, without building a graph first, just as in ordinary Python.
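The same computation as before, sketched in eager mode (assuming TensorFlow 2, where eager execution is the default, so no graph or session is needed):

```python
import tensorflow as tf

# With eager execution, operations run immediately and return
# concrete values, just like ordinary Python arithmetic.
a = tf.constant(2)
b = tf.constant(3)
c = (a + b) * a          # evaluated right away, no session required

result = int(c.numpy())  # extract a plain Python value
print(result)            # (2 + 3) * 2 = 10
```

Compare this with the graph-mode version: the define-then-run split disappears, which is what makes eager mode convenient for debugging and prototyping.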

In the next chapter, we will learn about gradient descent and its variants.
