Understanding computational graphs will help us think about complex models in terms of small subgraphs and operations.
Let's look at an example of a neural network with only one hidden layer and what its computation graph might look like in TensorFlow:
So, we have some hidden layer h that we are trying to compute, as the ReLU activation of some parameter matrix W times some input x plus a bias term b. The ReLU function takes the elementwise maximum of its input and zero.
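Written as a formula, with the max applied elementwise, that is:

h = ReLU(Wx + b) = max(Wx + b, 0)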
The following diagram shows what the graph might look like in TensorFlow:
In this graph, we have variables for our b and W, and we have something called a placeholder for x; we also have nodes for each of the operations in our graph.
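As a concrete counterpart to the diagram, here is a minimal sketch of how this graph could be built. The sizes (three inputs, two hidden units) are hypothetical, and the code assumes the TensorFlow 1.x graph-building API, where placeholders are fed concrete values at run time:

```python
import numpy as np
import tensorflow as tf  # assumes the TensorFlow 1.x graph API

# Placeholder for the input x: a node whose value is supplied at run time.
x = tf.placeholder(tf.float32, shape=(3, 1), name="x")

# Variables for the parameters W and b: stateful nodes that persist
# across graph executions and can be updated during training.
W = tf.Variable(tf.random_normal((2, 3)), name="W")
b = tf.Variable(tf.zeros((2, 1)), name="b")

# Operation nodes: matrix multiplication, addition, and ReLU,
# computing h = ReLU(Wx + b).
h = tf.nn.relu(tf.matmul(W, x) + b)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed a concrete value for the placeholder and run the graph.
    print(sess.run(h, feed_dict={x: np.array([[1.0], [-2.0], [0.5]])}))
```

With that picture in mind, let's get into more detail about those node types.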