4.1.5 VISUALIZATION
Visualization can be a useful tool to improve the interpretability of neural networks for verifi-
cation and validation practitioners. Visualization techniques transform data into forms that
humans can interpret more easily. There are already visualization tools and techniques in
traditional software verification and validation that create visual representations of data to
improve interpretability [174]. For neural networks, visualization techniques could, for example,
produce graphical representations of changes in weights or internal connections, or plots of the
error function over the course of training, improving understanding of both the decision-making
process and the learning process. Such representations can give greater insight into the structure
of the neural network, including its weights and biases and how they change during training
[206, 207].
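As a rough illustration, the following Python sketch (using PyTorch and matplotlib; the trained model and the recorded per-epoch losses are hypothetical inputs supplied by the caller) plots a learning curve next to a histogram of the network's weights:

import matplotlib.pyplot as plt
import torch

def plot_training_diagnostics(model, losses):
    # Plot the loss over epochs and a histogram of all weights.
    fig, (ax_loss, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))

    # Learning curve: plateaus or oscillations hint at training problems.
    ax_loss.plot(range(1, len(losses) + 1), losses)
    ax_loss.set_xlabel("epoch")
    ax_loss.set_ylabel("training loss")

    # Weight distribution: reveals dead units or exploding magnitudes.
    weights = torch.cat([p.detach().flatten() for p in model.parameters()])
    ax_hist.hist(weights.cpu().numpy(), bins=50)
    ax_hist.set_xlabel("weight value")

    plt.tight_layout()
    plt.show()

Called after training with the stored loss history, such plots make trends in the learning process visible at a glance.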
Visualization tools have recently gained significant interest for interpreting what CNNs
learn. One popular technique is Activation Maximization, which provides insight into which
features a CNN classifier has learned to associate with different classes. Activation Max-
imization synthesizes an image that maximizes the output of a given neuron or output unit. The
same technique can be used to create adversarial examples: images that are unrecognizable to
humans but that the CNN classifies with high confidence [167]. However,
Simonyan et al. [208] introduced a regularization technique into this process, which created
more recognizable images, giving insight into the kind of features the CNN classifier was look-
ing for in the specific classes. Yosinski et al. [209] further extended this method with better
regularization techniques as well as investigation of neurons at all layers, rather than limiting
the study to the output neurons. This showed that neurons at different layers were learning
different features, with higher layers learning more complex and abstract features (e.g., faces,
wheels, eyes) while the lower layers were learning more basic features (e.g., edges and corners).
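To make the procedure concrete, the following sketch performs activation maximization by gradient ascent on the input image, adding an L2 penalty in the spirit of the regularizers discussed above; it assumes a PyTorch classifier over 3 x 224 x 224 images and is an illustrative sketch, not the exact method of Simonyan et al. [208] or Yosinski et al. [209]:

import torch

def activation_maximization(model, class_idx, steps=200, lr=0.1, l2_weight=1e-3):
    # Synthesize an input that maximizes one output neuron of the classifier.
    model.eval()
    img = torch.randn(1, 3, 224, 224, requires_grad=True)  # random start image
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        score = model(img)[0, class_idx]
        # Maximize the class score; the L2 penalty keeps the image interpretable.
        loss = -score + l2_weight * img.pow(2).sum()
        loss.backward()
        optimizer.step()
    return img.detach()

Without the penalty term, the same loop tends to produce the unrecognizable, adversarial-looking images noted above; the regularizer trades raw activation strength for visual plausibility.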
Therefore, this type of visualization shows great potential to improve the interpretability of
CNNs, as it can provide significant insight into what the neural network has learned [209]. For in-
stance, such visualization techniques were used by Bojarski et al. [87, 210] in the NVIDIA
PilotNet project to visualize the internal state of the CNN used for steering. By studying the
activations within different layers of the trained CNN, they were able to gain a better under-
standing of what features the neural network had learned to recognize. The analysis showed that
even with only the human steering angles as training input, the CNN had learned to recognize
useful road features such as the edges of the road. The authors also investigated the activations
in the network when given an image with no road as input and found that the activations of the
two feature maps mostly contained noise, indicating that the CNN found no useful features in
the image. Therefore, the CNN only learned to recognize features that were useful for its task,
such as road-related features.
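An inspection of intermediate activations of this kind can be sketched with PyTorch forward hooks; the function below is a hypothetical illustration of the general idea, not the exact analysis performed by Bojarski et al. [87, 210]:

import torch

def capture_feature_maps(model, layer, image):
    # Record the activations that a chosen layer produces for one input image.
    maps = {}

    def hook(module, inputs, output):
        maps["activation"] = output.detach()

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        model(image)
    handle.remove()  # remove the hook so later forward passes are unaffected
    return maps["activation"]

Plotting each channel of the returned tensor as a grayscale image shows which input structures excite the layer; feature maps that contain mostly noise, as in the no-road experiment above, suggest that the layer found nothing useful in the input.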
Another useful visualization technique investigates the importance of different neurons
in creating predictions. By analyzing the gradients flowing into the last convolutional layer in
the CNN, the contribution of each neuron to the final prediction can be determined. This in-