28 3. DEEP LEARNING FOR VEHICLE CONTROL
tools for training DNNs, which aim to avoid overfitting to the training data. Early stopping
uses a validation set, a data set held out from the training set and not used for weight updates,
to monitor the model's performance on unseen data. If the validation performance begins to
degrade while the training score continues to improve, training should be stopped, as the model
is beginning to overfit. Regularization techniques are another set of tools for reducing test error,
although sometimes at the cost of an increased training error. Regularizing by constraining the
weights of the network, or by dropping some neurons out during training, ensures that trends
over the whole training set are learned and that complex co-adaptations between neurons are
prevented [142–146].
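The early-stopping rule described above can be sketched as a small framework-agnostic helper. The class name, the `patience` parameter (epochs to wait after the last validation improvement), and the example loss values are illustrative assumptions, not details from the text.

```python
class EarlyStopping:
    """Stop training when validation loss stops improving.

    Hypothetical helper for illustration; `patience` is the number of
    epochs to wait after the last improvement before stopping.
    """

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0
        self.should_stop = False

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss        # validation improved: reset the counter
            self.wait = 0
        else:
            self.wait += 1              # no improvement this epoch
            if self.wait >= self.patience:
                self.should_stop = True  # overfitting likely; halt training
        return self.should_stop


# Example: validation loss falls until epoch 2, then rises while the
# training loss would keep decreasing.
val_losses = [0.90, 0.70, 0.65, 0.66, 0.68, 0.71, 0.75]
stopper = EarlyStopping(patience=3)
stopped_at = None
for epoch, loss in enumerate(val_losses):
    if stopper.step(loss):
        stopped_at = epoch  # training halts here instead of overfitting further
        break
```

In this sketch the best validation loss is reached at epoch 2; three non-improving epochs follow, so training halts at epoch 5 rather than continuing to overfit.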
3.2.5 VERIFICATION AND VALIDATION
Verifying the model performance is important for any real world vehicle application. Currently,
the trend in the field is to evaluate the model performance in simulation, due to the high cost
and time requirements of field testing [147]. While simulation allows for faster testing of novel
ideas and easier testing of safety-critical scenarios, it must be kept in mind that the use of sim-
ulation introduces various modeling inaccuracies. Accurately modeling physical phenomena
such as friction and contact can be difficult. As a result of these inaccuracies, a model
trained in simulation may not work in the real world. This is exacerbated by the use of vision,
as creating an adequately realistic graphical simulator can be difficult. This leads to the model
overfitting to the simulation environment, and failing to generalize to the differences in the
real-world environment. The transfer from simulation to the real world is currently an ac-
tive research area, with many promising approaches to this problem such as domain
randomization [148–150], domain adaptation [151–154], and iterative learning control [155–
157]. Furthermore, approaches such as the mid-to-mid learning in ChauffeurNet [114] can be
used as an alternative workaround to this issue. Regardless of the training method, extensive
real-world testing of any autonomous vehicle application will be required to ensure the model
performs adequately in its intended operational environment [158].
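The domain randomization idea mentioned above can be illustrated with a short sketch: each training episode samples a fresh set of simulator parameters so the learned policy cannot overfit to any single simulator configuration. The parameter names and ranges below are illustrative assumptions, not values from the text or the cited works.

```python
import random


def sample_sim_params(rng):
    """Sample one randomized simulator configuration.

    The parameters and ranges are hypothetical examples of quantities
    a driving simulator might randomize between episodes.
    """
    return {
        "friction": rng.uniform(0.5, 1.2),          # tire-road friction coefficient
        "mass_kg": rng.uniform(1200.0, 1800.0),     # vehicle mass
        "light_intensity": rng.uniform(0.3, 1.0),   # rendering brightness
        "camera_noise_std": rng.uniform(0.0, 0.05), # additive pixel noise
    }


# Each episode sees a different physics/rendering configuration, so the
# policy must learn behavior that is robust across the whole range.
rng = random.Random(0)  # fixed seed for reproducibility
episodes = [sample_sim_params(rng) for _ in range(100)]
```

A policy trained across such varied configurations is more likely to treat the real world as just another sample from the randomized distribution, which is the intuition behind the domain randomization approaches cited above.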
3.2.6 SAFETY AND INTERPRETABILITY
The safety of deep learning models is perhaps the most critical issue hindering the deploy-
ment of learned autonomous vehicles. A serious mistake or malfunction in an autonomous ve-
hicle can cause death or serious harm to people. Therefore, in a safety-critical system such as
an autonomous vehicle, the safety of each sub-component, as well as of the overall system, must
be validated to a high assurance level. However, this is challenging where deep neural networks
are used, as these systems are opaque in nature. This is referred to as the black box problem [159].
While we can run targeted tests on trained models to observe their performance, the
complexity of deep learning models means that understanding what the system has learned
and what rules it follows is practically impossible. This means that traditional safety validation
techniques are less useful where deep neural networks are used. Therefore, new techniques, or