Useful operations

In all the previous TensorFlow models, we encountered functions that create layers. A few of these layers are more or less inescapable.

The first one is tf.layers.dense, which connects all inputs to a new layer. We saw it in the auto-encoder example; it takes as its inputs parameter a tensor (variable, placeholder, and so on), and as units the number of output units. By default, it also has a bias, meaning that the layer computes inputs * weights + bias.
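Here is a minimal sketch of such a layer, assuming the TensorFlow 1.x tf.layers API and a hypothetical input of 784 features per sample:

    import tensorflow as tf

    # Hypothetical input batch: 784 features per sample (e.g., flattened 28x28 images)
    inputs = tf.placeholder(tf.float32, shape=[None, 784])

    # Fully connected layer computing inputs * weights + bias, with 128 output units
    hidden = tf.layers.dense(inputs, units=128)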

Another important layer that we will see later is conv2d. It computes a convolution over an image, and this time it takes a filters parameter that indicates the number of feature maps in the output layer. It is what defines convolutional neural networks. Here is the usual formula for the convolution:

(I * K)(i, j) = Σ_m Σ_n I(i − m, j − n) K(m, n)

The standard name for the tensor of coefficients in the convolution (K above) is a kernel.
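As a short sketch of this layer in the same tf.layers API, assuming a hypothetical batch of 28x28 grayscale images:

    import tensorflow as tf

    # Hypothetical batch of 28x28 grayscale images (NHWC layout)
    images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

    # Convolution layer: filters=32 feature maps, each produced by a 3x3 kernel
    features = tf.layers.conv2d(images, filters=32, kernel_size=(3, 3),
                                padding='same')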

Let's have a look at a few other layers:

  • dropout randomly sets some of a layer's activations (not its weights) to zero during the training phase. This is very important in a complex deep-learning network to prevent it from overfitting. We will also see it later.
  • max_pooling2d is a very important complement to the convolution layer. It selects the maximum of the input over a two-dimensional window. There is also a one-dimensional version that works after dense layers. Both layers are shown in the sketch after this list.
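Here is a sketch combining the two with a convolution, again assuming the TensorFlow 1.x tf.layers API and hypothetical image dimensions:

    import tensorflow as tf

    # Hypothetical batch of 28x28 grayscale images
    images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
    # True during training, False at test time
    training = tf.placeholder(tf.bool)

    conv = tf.layers.conv2d(images, filters=32, kernel_size=(3, 3), padding='same')
    # Keep the maximum of each 2x2 window, halving each spatial dimension
    pooled = tf.layers.max_pooling2d(conv, pool_size=(2, 2), strides=(2, 2))
    # Randomly zero half of the activations, but only while training
    regularized = tf.layers.dropout(pooled, rate=0.5, training=training)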

All these layers have an activation parameter. The activation transforms the layer's linear operation into a nonlinear one. Let's have a look at the most useful ones from the tf.nn module: relu, sigmoid, tanh, and softmax.
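As a brief illustration, here is how an activation is passed to a layer; the layer sizes are hypothetical:

    import tensorflow as tf

    inputs = tf.placeholder(tf.float32, shape=[None, 784])

    # The same dense layer with different nonlinearities from tf.nn
    relu_out = tf.layers.dense(inputs, units=128, activation=tf.nn.relu)
    sigmoid_out = tf.layers.dense(inputs, units=128, activation=tf.nn.sigmoid)
    tanh_out = tf.layers.dense(inputs, units=128, activation=tf.nn.tanh)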

As we saw earlier, scikit-learn provides lots of metrics to compute accuracy, curves, and more. TensorFlow provides similar operations in the tf.metrics module.
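For instance, here is a sketch of the accuracy metric (the label and prediction values are made up):

    import tensorflow as tf

    labels = tf.placeholder(tf.int64, shape=[None])
    predictions = tf.placeholder(tf.int64, shape=[None])

    # tf.metrics.accuracy returns the running accuracy and an op that updates it
    accuracy, update_op = tf.metrics.accuracy(labels, predictions)

    with tf.Session() as sess:
        # Metrics keep their state in local variables, which must be initialized
        sess.run(tf.local_variables_initializer())
        sess.run(update_op, feed_dict={labels: [0, 1, 1], predictions: [0, 1, 0]})
        print(sess.run(accuracy))  # 2 correct out of 3 -> ~0.667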
