We saw a sample fine-tuning implementation in step 1. A fine-tuning configuration specifies default/global changes that apply across all layers. So, if we want to exclude specific layers from the fine-tuning configuration, we need to freeze those layers. Unless we do that, the current values of the specified modification types (gradients, activations, and so on) will be overridden in the new model for every layer.
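As a minimal sketch of this idea, the snippet below builds a global fine-tuning configuration and then freezes the lower layers so the changes only apply above a chosen boundary. The updater, seed, and layer index `4` are hypothetical values for illustration, and `oldModel` stands in for a pretrained `MultiLayerNetwork` loaded earlier:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.learning.config.Nesterovs;

// Global fine-tuning changes, applied to every non-frozen layer.
FineTuneConfiguration fineTuneConf = new FineTuneConfiguration.Builder()
        .updater(new Nesterovs(0.01))   // hypothetical updater/learning rate
        .seed(12345)
        .build();

// Freeze layers 0..4 (inclusive) so the fine-tuning configuration does not
// override their parameters; only the layers above index 4 are trained.
MultiLayerNetwork newModel = new TransferLearning.Builder(oldModel)
        .fineTuneConfiguration(fineTuneConf)
        .setFeatureExtractor(4)         // hypothetical boundary layer index
        .build();
```

The `setFeatureExtractor()` call is what marks the lower layers as frozen; without it, the fine-tuning defaults would propagate to every layer in the new model.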
In step 2, if the original MultiLayerNetwork model has convolutional layers, then you can modify the convolution mode as well. As you might have guessed, this applies if you perform transfer learning on the image classification model from Chapter 4, Building Convolutional Neural Networks. Also, if your convolutional neural network is meant to run in CUDA-enabled GPU mode, you can specify the cuDNN algo mode with the transfer learning API. We can choose an algorithmic approach (PREFER_FASTEST, NO_WORKSPACE, or USER_SPECIFIED) for cuDNN, which affects its performance and memory usage. Use the cudnnAlgoMode() method with the PREFER_FASTEST mode to achieve performance improvements.
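As a hedged sketch, the fragment below shows where cudnnAlgoMode() fits when defining a convolutional layer; the kernel size and channel counts are hypothetical placeholders:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

// Ask cuDNN's auto-tuner to pick the fastest available algorithm,
// possibly at the cost of additional workspace (GPU) memory.
ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)  // hypothetical 3x3 kernel
        .nIn(3)      // hypothetical input channels (for example, RGB)
        .nOut(16)    // hypothetical output feature maps
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.PREFER_FASTEST)
        .build();
```

NO_WORKSPACE trades speed for a smaller memory footprint, while USER_SPECIFIED lets you pin particular cuDNN algorithms yourself; PREFER_FASTEST is the usual choice when GPU memory is not a constraint.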