4.1. VALIDATION TECHNIQUES 33
There are a number of identifiable challenges which limit the current use of formal meth-
ods; they could become more powerful tools for neural network validation given improve-
ments in environment modeling, formal specification, and the interpretability of the networks.
Environment modeling poses a challenge because, in contrast to traditional formal verification
methods (where the environment can be well defined and often even over-approximated), the
environment of a neural network system may be too complex to define precisely. Therefore, the
uncertainty of the model must be taken into account in the verification and validation process.
One approach is to identify the assumptions made about the environment at design time, and then
monitor the validity of these assumptions at run-time. A further challenge is to provide accurate
formal specifications for the system. This is difficult in domains where the operational envi-
ronment of the neural network system is very complex. The formal specification must accurately
describe both the desired and the undesired behavior of the system. This can be made easier by
specifying the end-to-end behavior of the entire neural network system at the system level,
rather than writing a component-level specification for each element of
the system. For instance, for an autonomous vehicle, the behavior should not be specified at the
level of the AI system itself, but as overall vehicle behavior, such as a minimum distance from
obstacles, which can be measured and defined more easily [175].
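A system-level specification of this kind can be checked directly at run-time. The following is a minimal sketch, in which the threshold value, the function name, and the distance readings are all illustrative assumptions rather than values from the text:

```python
# Sketch of a system-level specification check: rather than constraining
# the neural network's internals, we check an end-to-end vehicle
# property -- the minimum distance to surrounding obstacles.

MIN_SAFE_DISTANCE_M = 2.0  # assumed safety threshold; domain-specific


def violates_distance_spec(obstacle_distances_m):
    """Return True if any measured obstacle distance breaks the spec."""
    return any(d < MIN_SAFE_DISTANCE_M for d in obstacle_distances_m)


# Example: lidar-style distance readings (fabricated values)
readings = [5.2, 3.1, 1.4, 8.0]
print(violates_distance_spec(readings))  # one reading is below 2.0 m
```

The same predicate could equally be evaluated offline against logged test drives, which is one reason system-level specifications are easier to work with than component-level ones.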
4.1.2 RUN-TIME MONITORING
Run-time monitoring can be used to observe the real-time behavior of the system to ensure safe
operation. It is well suited to complex cyber-physical systems, where exhaustive offline testing is
often impossible due to their adaptive, non-deterministic, and/or near-infinite state spaces [163].
Run-time monitoring is used to ensure that the system operates within a pre-defined
safe boundary. Additionally, run-time monitoring can be used to ensure system stability and to
identify non-convergent behavior using methods based on Lyapunov stability theory [183].
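One simple form such a stability monitor could take is to evaluate a candidate Lyapunov function along the observed trajectory and flag any step at which it fails to decrease. The sketch below assumes a scalar state and a quadratic candidate function purely for illustration; the function, tolerance, and trajectory values are not from the text:

```python
# Minimal sketch of a Lyapunov-style run-time check, assuming we can
# evaluate a candidate Lyapunov function V(x) on each observed state.
# If V increases between consecutive steps, the monitor flags possible
# non-convergent behavior.


def lyapunov_monitor(states, V, tolerance=1e-9):
    """Return the indices k where V(x_{k+1}) >= V(x_k) + tolerance."""
    values = [V(x) for x in states]
    return [k for k in range(len(values) - 1)
            if values[k + 1] >= values[k] + tolerance]


# Example with V(x) = x^2 for a scalar state that briefly diverges
trajectory = [4.0, 2.0, 1.0, 1.5, 0.5]
alerts = lyapunov_monitor(trajectory, lambda x: x * x)
print(alerts)  # step at which the state moved away from the equilibrium
```

In practice, finding a valid Lyapunov function for a learned controller is itself a hard problem; the monitor only makes sense once such a candidate function is available.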
The core idea of run-time monitoring is to identify potential safety problems arising in the system
before they can cause accidents. Therefore, in addition to monitoring techniques, thought must
also be given to recovery techniques for when the safety monitors are violated. The first approach
is to shut down the system and stop operation. However, this does not allow the system to
recover safe operation, and moreover shutting down an autonomous agent such as a vehicle can
be dangerous in some environments. A second approach is to utilize a recovery controller:
a controller that can ensure safe operation, even if with
degraded performance and reduced capabilities. In autonomous vehicle systems, this could be
a traditional rule-based controller that has lower performance but whose safety can be
more easily guaranteed. This type of approach is also referred to as run-time monitoring and
switching. A third approach utilizes software safety cages, which limit the outputs of the system
to safe bounds.
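The second and third approaches can be combined in a single control loop, as in the following sketch. All names, thresholds, and the stand-in controllers are illustrative assumptions, not a real system:

```python
# Sketch combining two recovery approaches: a software safety cage that
# clamps the learned controller's output into safe bounds, and a switch
# to a simple rule-based fallback controller when the monitored state
# leaves the safe operating region (monitoring and switching).

SAFE_SPEED_RANGE = (0.0, 30.0)   # assumed safe output bounds (m/s)
SAFE_DISTANCE_M = 2.0            # assumed monitor threshold


def safety_cage(command):
    """Clamp the controller output into the safe range."""
    lo, hi = SAFE_SPEED_RANGE
    return max(lo, min(hi, command))


def fallback_controller(state):
    """Rule-based recovery controller: conservative low-speed command."""
    return 1.0


def neural_controller(state):
    """Stand-in for a learned policy (fabricated unsafe output)."""
    return 45.0


def control_step(state):
    # Monitoring and switching: hand over to the recovery controller
    # when the safety monitor is violated.
    if state["obstacle_distance_m"] < SAFE_DISTANCE_M:
        return fallback_controller(state)
    # Otherwise the learned output passes through the safety cage.
    return safety_cage(neural_controller(state))


print(control_step({"obstacle_distance_m": 10.0}))  # caged to 30.0
print(control_step({"obstacle_distance_m": 1.0}))   # fallback: 1.0
```

Note that the cage and the switch are complementary: the cage bounds every output, while the switch removes the learned controller from the loop entirely once the monitored state is unsafe.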
It should be noted that run-time monitoring methods are very domain specific, and their
applicability depends on the availability of effective safety metrics and recovery techniques.