34 4. SAFETY VALIDATION OF NEURAL NETWORKS
Moreover, run-time monitoring and validation should also account for uncertainties in state
observations. These can be caused, for example, by sensor inaccuracies, defects, or software
faults. Therefore, run-time monitoring techniques must accurately account for the impact of
hardware and software faults; otherwise, safety violations may not be identified in time to
prevent accidents. One approach for considering these uncertainties is the use of probabilistic
models. In this approach, the behavior of the system and its environment is described by a likeli-
hood based on the confidence in the safety monitors' observations. The safety monitors use these
probabilistic models to provide a likelihood that the system is behaving according to its speci-
fication [163]. A number of such probabilistic run-time monitors have been proposed in the
literature; for instance, see [184–189]. A further drawback of run-time monitoring methods is
the increased computational overhead in systems where complicated properties must be checked.
In some cases it may be acceptable to lower the accuracy of the safety monitors to reduce this
computational overhead, for example by using a control-theoretic approach to trade off accuracy
against computational overhead [190].
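To make the probabilistic-monitor idea concrete, the following sketch computes the likelihood of a safety violation from a noisy sensor reading, assuming Gaussian observation noise. The function name, thresholds, and noise model are illustrative assumptions, not part of the cited approaches.

```python
import math

def violation_probability(measured_distance, safety_threshold, sensor_sigma):
    """Likelihood that the true distance is below a safety threshold,
    given a noisy measurement (a minimal probabilistic run-time monitor).

    Assumes true_distance ~ Normal(measured_distance, sensor_sigma**2),
    so P(true < threshold) is the Gaussian CDF evaluated at the threshold.
    """
    z = (safety_threshold - measured_distance) / sensor_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For example, a measured distance of 12 m against a 10 m threshold with 2 m sensor noise yields a violation likelihood of about 0.16, so the monitor can flag a risk even though the point estimate looks safe. Tightening or loosening the assumed noise model directly trades monitor accuracy against the cost of evaluating it, in the spirit of [190].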
4.1.3 SOFTWARE SAFETY CAGES
Software safety cages can be used to improve the safety of a system by limiting the system out-
puts to a safe envelope. In their simplest form, software safety cages are hard upper and/or
lower limits on the system output. However, when combined with run-time monitoring meth-
ods, software safety cages can be dynamic: by using observations from the safety monitors as
context, they can limit the output of the neural network system based on the current state. For
example, in an autonomous vehicle, this approach can be used to prevent acceleration, or to force
braking, if the vehicle is within a critical distance of another vehicle, regardless of the outputs
of the neural network controller. In this way, potentially dangerous outputs of the neural network
can be suppressed during critical scenarios. Therefore, the safety validation requirements on the
neural network can be relaxed, provided that the software safety cages themselves can be validated
with high assurance.
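The combination of a static clamp and a monitor-driven override described above can be sketched as follows. All function names, limits, and the critical-distance threshold are hypothetical values chosen for illustration.

```python
def safety_cage(nn_accel_cmd, distance_to_lead, speed,
                critical_distance=10.0, max_brake=-8.0, max_accel=3.0):
    """Clamp a neural-network acceleration command (m/s^2) to a safe envelope.

    The static cage is the hard [max_brake, max_accel] clamp; the dynamic
    cage overrides the command entirely when the monitored distance to the
    lead vehicle falls below a critical threshold.
    """
    # Static cage: hard upper/lower limits on the output.
    cmd = max(max_brake, min(max_accel, nn_accel_cmd))
    # Dynamic cage: context from the run-time monitor (distance observation).
    if distance_to_lead < critical_distance:
        # Forbid acceleration and force braking, regardless of the NN output.
        cmd = max_brake if speed > 0 else 0.0
    return cmd
```

Note that the cage itself is a few lines of comparisons, which is precisely why it can be validated with far higher assurance than the neural network whose output it constrains.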
Heckemann et al. [191] proposed a framework which utilizes context-aware safety cages
to ensure the safety of complex and adaptive software algorithms in automotive applications.
The safety cages in this framework evaluate the hazards and the driving situation and assign an
Automotive Safety Integrity Level (ASIL) to their combination based on the estimated levels
of exposure, severity, and controllability. For instance, an emergency braking maneuver has a
higher severity in a high-speed driving situation. Therefore, the software safety cages can use
their context awareness to check the plausibility of function outputs and restrict the system
outputs depending on the current vehicle state. For example, although an emergency braking
maneuver might be allowed in an urban scenario, it might be restricted to limited braking power
when driving at high speeds on a highway. Furthermore, since the large number of inputs in
such a system would make a single safety cage very complex, the overall vehicle safety cage is
split into multiple smaller safety cages. Each individual safety cage addresses a certain aspect
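The speed-dependent braking restriction discussed above can be sketched as a small, self-contained cage. The speed limit and deceleration values are illustrative assumptions, not figures from [191].

```python
# Hypothetical speed-dependent braking cage (all thresholds illustrative).
def cage_brake_command(requested_decel, vehicle_speed_mps,
                       urban_speed_limit=14.0,   # ~50 km/h
                       full_decel=-9.0, limited_decel=-4.0):
    """Restrict emergency braking power based on the current vehicle speed.

    At urban speeds, full emergency braking is permitted; at highway speeds,
    the cage caps the deceleration, reflecting the higher severity assigned
    to a spurious full-braking maneuver in a high-speed driving situation.
    """
    floor = full_decel if vehicle_speed_mps <= urban_speed_limit else limited_decel
    # Clamp the requested deceleration (a negative value) to [floor, 0].
    return min(0.0, max(floor, requested_decel))
```

In a full system, several such single-aspect cages (braking, steering, acceleration) would run side by side, mirroring the decomposition of the overall vehicle safety cage into multiple smaller cages.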