4. SAFETY VALIDATION OF NEURAL NETWORKS
4.1 VALIDATION TECHNIQUES
4.1.1 FORMAL METHODS
Formal methods are a class of techniques that rely on formal logic and discrete mathematics
for the mathematical specification, design, and verification of systems [172–175]. Applying formal
methods to the verification and validation of software can be a rigorous and complex task, but it can
provide high safety assurance [176]. Typically, formal methods are applied to a mathematical model
of the system to prove that the system is complete and correct [177]. However, due to
the opacity of neural network systems, the applicability of formal methods to neural networks
is currently limited. Nevertheless, existing formal methods can provide limited insight when
applied to specific types of neural networks.
Rule extraction is a technique for extracting high-level rules that describe the operation of
the network; these rules are then examined to determine the acceptability and safety of the net-
work. During development, the extracted rules can provide further understanding
of what the network is learning. In some types of networks, rule insertion can also be used to
insert rules into the network to ensure the desired behavior. Such methods have, for example,
been shown to work in hybrid neuro-fuzzy systems [178, 179].
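To make the idea concrete, the following is a minimal sketch of decompositional rule extraction, assuming a toy perceptron with hand-picked weights (the weights and the binary input space are hypothetical; practical extractors, such as those used in neuro-fuzzy systems, operate on much richer structures):

```python
from itertools import product

# Hypothetical weights of a tiny trained perceptron with two binary inputs.
W = [0.8, -0.6]   # input weights (assumed, for illustration only)
B = -0.2          # bias

def perceptron(x):
    """Step-activation perceptron: fires iff the weighted sum exceeds zero."""
    return int(sum(w * xi for w, xi in zip(W, x)) + B > 0)

# Naive rule extraction: enumerate the (small) input space and record
# which input patterns make the network fire.
rules = [x for x in product([0, 1], repeat=len(W)) if perceptron(x) == 1]

for x in rules:
    conditions = " AND ".join(f"x{i}={v}" for i, v in enumerate(x))
    print(f"IF {conditions} THEN output=1")
# With the weights above this prints one rule: IF x0=1 AND x1=0 THEN output=1
```

Exhaustive enumeration is only feasible for trivially small input spaces; realistic rule extraction methods instead approximate or decompose the network, trading exactness for scalability.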
Methods extracting guarantees from neural networks have been investigated as well. For instance, Pulina and Tacchella [180]
encoded a neural network as a set of constraints which could then be solved by a Boolean Satis-
fiability Problem solver. This abstraction allowed formal methods to be applied to the network
and provided some guarantees on the network's outputs in the desired neighborhoods. However,
the technique was only tested on a six-neuron network, and the sigmoid activation function was
approximated using constraints, limiting the insight it provides into its efficacy on practical deep
neural networks. This method was further extended by Katz et al. [181] in the Reluplex algo-
rithm. The method is based on the Simplex verification method, extended to work on ReLU
activation functions by utilizing a search tree instead of the single search path used in Sim-
plex. While this system specializes in networks with ReLU neurons, it was shown to provide
some behavioral guarantees for networks with up to 300 neurons.
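In the same constraint-based spirit, though far simpler than either of these tools, the sketch below encodes a hypothetical one-hidden-neuron ReLU network for the off-the-shelf SMT solver Z3 and checks an output bound over an input interval; the weights and the bound are assumed for illustration:

```python
from z3 import Real, If, Solver, And, sat

# Hypothetical network: h = relu(2*x - 1), y = 0.5*h (weights assumed).
x, h, y = Real("x"), Real("h"), Real("y")

def relu(v):
    # Piecewise-linear encoding of ReLU as an SMT if-then-else term.
    return If(v > 0, v, 0)

s = Solver()
s.add(h == relu(2 * x - 1))   # hidden layer
s.add(y == 0.5 * h)           # output layer
s.add(And(x >= 0, x <= 1))    # input neighborhood of interest
s.add(y > 0.6)                # negation of the safety property y <= 0.6

# unsat means no input in [0, 1] drives y above 0.6, so the property holds.
print("property holds" if s.check() != sat else f"counterexample: {s.model()}")
```

Because each ReLU is encoded as an if-then-else term, the solver must branch on each neuron's phase (active or inactive); this case splitting is what makes naive encodings scale poorly and is precisely what Reluplex is designed to mitigate.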
On the other hand, Huang et
al. [182] proposed a framework, based on Satisfiability Modulo Theory, for verifying the robustness
of feedforward multi-layer neural networks against adversarial examples. The proposed method
searches for adversarial examples by analyzing the network layer by layer. If no
adversarial examples are found, the network can be verified as safe; if adversarial
examples are found, they can be used to retrain the network and thus improve its robustness.
The main weakness of the method is that the computational complexity of the verification
process grows exponentially with the number of input features, which limits the
scalability of the method. Moreover, the technique relies on various assumptions (e.g.,
that the output for a given input is mainly dependent on a small subset of neurons).
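The authors' exhaustive layer-by-layer SMT search is beyond a short sketch, but the falsification half of the task, actively searching for an adversarial example, can be illustrated with a common gradient-based heuristic, the fast gradient sign method; the logistic classifier and its weights below are hypothetical:

```python
import numpy as np

# Hypothetical logistic classifier p(x) = sigmoid(w.x + b); weights assumed.
w = np.array([1.5, -2.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, eps):
    # For logistic loss with true label 1, the input gradient is (p - 1) * w;
    # stepping along its sign maximally increases the loss per unit of
    # infinity-norm perturbation.
    grad = (predict(x) - 1.0) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2])           # a point the model classifies as label 1
x_adv = fgsm(x, eps=0.5)
print(predict(x), predict(x_adv))  # ~0.77 before vs ~0.37 after the attack
```

Unlike the SMT-based search, such heuristics can miss adversarial examples and therefore cannot certify safety; they only demonstrate unsafety when they succeed.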
Overall, the trend seen in current formal verification methods is that some guarantees can be obtained
for smaller networks, but the techniques are limited by their reliance on assumptions and their lack
of scalability to larger networks.