and a mid-to-mid learning approach were key to allowing the model to learn to drive the vehicle
while avoiding undesirable behaviors.
A summary of the full vehicle control techniques presented in this chapter can be seen
in Table 3.3. In contrast to the previous sections, a variety of learning and implementation
approaches were used in this section. Machine learning approaches have demonstrated strong
results for driving tasks such as lane keeping or vehicle following. While these results are
important, more complex tasks must be considered for fully automated driving. The network should
be able to consider low-level goals (e.g., lane keeping, vehicle following) alongside high-level
goals (e.g., route following, traffic lights). Early research has been carried out on integrating high-
level context into machine learning models, but these models are still far from human-level
performance.
3.2 RESEARCH CHALLENGES
The discussion in previous sections has shown that important results have been achieved in the
field of autonomous vehicle control using deep learning techniques. However, research in this
area is ongoing, and multiple research challenges still prevent these systems from
achieving the performance levels required for commercial deployment. In this section, we will
discuss some key research challenges that need to be overcome before the systems are ready
to be deployed. It is worth remembering that besides these research challenges, there are also
more general challenges preventing the deployment of autonomous vehicles such as legislative,
user acceptance, and economic challenges. However, since the focus here is on the research
challenges, for information on more general challenges we refer the interested readers to the
discussions in [115–120].
3.2.1 COMPUTATION
Computation requirements are an issue for any deep learning solution due to the vast amount of
training data (supervised learning) or training experience (reinforcement learning) required. Al-
though the increase in parallelisable computational power has (partly) sparked the recent interest
in deep learning, with deeper models and more complex problems the computational require-
ments are becoming an obstacle for deployment of deep neural network driven autonomous ve-
hicles [121–123]. Furthermore, the sample inefficiency, which many state-of-the-art deep rein-
forcement learning algorithms suffer from [124], means that a vast number of training episodes
are required to converge to an optimal control policy. Since training the policy in the real world
is often unfeasible, simulation is used as a solution. However, this results in high computation
requirements for reinforcement learning.
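As a rough illustration of the scale involved, the following back-of-envelope sketch estimates the simulated driving time and wall-clock time implied by the large sample counts typical of model-free deep reinforcement learning. All figures (environment steps to convergence, simulator control frequency, and simulator speed-up) are hypothetical placeholders chosen for illustration, not measurements from any particular system:

```python
# Back-of-envelope estimate of the simulation burden of sample-inefficient
# deep reinforcement learning. All constants below are assumed values.

ENV_STEPS_TO_CONVERGE = 50_000_000  # total environment interactions (assumed)
SIM_HZ = 20                         # simulator control frequency in steps/s (assumed)
SIM_SPEEDUP = 5.0                   # simulator runs 5x faster than real time (assumed)

# Total simulated driving time implied by the step count.
simulated_hours = ENV_STEPS_TO_CONVERGE / SIM_HZ / 3600

# Wall-clock time on a single simulator instance at the assumed speed-up.
wall_clock_hours = simulated_hours / SIM_SPEEDUP

print(f"Simulated driving time: {simulated_hours:,.0f} h")
print(f"Wall-clock time (one simulator instance): {wall_clock_hours:,.0f} h")
```

Even under these optimistic assumptions, a single training run corresponds to hundreds of hours of simulation, which is why parallel simulator instances and the associated computational cost become a practical bottleneck.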