8
Intelligent Traffic: Formulating an Applied Research Methodology for Computer Vision and Vehicle Detection

Gabrielle Bakker‐Reynolds, Emre Erturk, and Istvan Lengyel

Eastern Institute of Technology, School of Computing, Hawke's Bay, New Zealand

8.1 Introduction

8.1.1 Introduction

As the potential benefits of smart city services continue to attract attention, focus is placed on what new instruments and approaches may be deployed to achieve better results. Examining innovative technologies draws attention to computer vision, a field of computer science that enables computers to process, perceive, and identify images, mimicking how humans make sense of the physical world through different learning architectures [1].

In recent years, one area of interest for computer vision has been its ability to support traffic management systems. Since computer vision can classify and detect objects, people, and visual details in real time, the ways in which it may assist the development of traffic management solutions are vast. Current examples of computer vision‐driven software solutions include travel assistance and navigation, parking management and enforcement, real‐time traffic control, automated vehicle identification, and license plate recognition [2]. In traffic management solutions, computer vision tools can be used in conjunction with the Internet of Things (IoT). However, the core city management objectives should generally determine their strategic purpose.

In the context of this research, computer vision is important for object detection. Object detection can be understood as a technique that enables software solutions to detect semantic objects of certain classes within images, video files, or real‐time recordings [3]. Established applications of object detection include software solutions with facial detection and pedestrian detection capabilities. Accordingly, this research sets out to investigate how computer vision can be utilized to support a traffic management prototype by providing a cost‐effective and efficient way to support traffic flow analysis.

8.1.2 Background

This research aims to develop a vehicle detection prototype for traffic flow analysis by harnessing the capabilities of computer vision. The function of computer vision within this context is to introduce an innovative method for gaining insight into busy and peak traffic times. These insights can then be used for a variety of purposes, such as planning and developing new routes or scheduling maintenance on particular streets or roads.

Presently, traffic flow analysis in the selected region of New Zealand is conducted primarily through pneumatic tube‐based sensors used alongside MetroCount traffic counters [4]. The pneumatic tubes are laid across the road, and the counter module registers the pulses produced by each vehicle as it drives over the tubes [4]. Additionally, reliance is placed on people to monitor an intersection for a specific period of time. However, this approach leaves room for human error, is time consuming, and may be considered a mundane task to carry out. Hence, it is important that the proposed vehicle detection prototype is optimized to accurately detect the number of vehicles. Given the significance of accurate vehicle detection, an evaluation tool is employed to carry out assessments, and evaluative measures are set in place to ensure that the accuracy and function of the prototype are analyzed accordingly.

8.1.3 Problem Statement

8.1.3.1 Purpose of Research

This research is concerned with examining how computer vision can be used for vehicle detection. The purpose of this is to serve as an innovative alternative to current traffic flow analysis techniques.

Another goal of this research is to understand how innovative technologies may contribute to increasing the efficiency of current traffic management approaches in smart cities. In recent years, computer vision has been noted as an effective and innovative tool for aiding many different areas of general traffic management [5, 6]. Furthermore, a prototype can be designed for capturing traffic flow data, to be evaluated with regard to accuracy and time.

8.1.3.2 Research Questions

The primary research question is as follows:

  1. How can computer vision be used to support traffic flow analysis?

Accompanying questions of this research include the following:

  1. What is required within the planning, development, and implementation phases of a prototype that performs accurate vehicle detection?
  2. How will the performance of a vehicle‐detection prototype be measured?
  3. What are the barriers to the development of a vehicle‐detection prototype?

8.1.3.3 Study Aim and Objectives

The central aim of the research is to investigate and discuss a vehicle‐detection prototype. Based on this aim, the three central objectives are as follows:

  1. To describe the planning, development, and implementation phases of a modern vehicle‐detection prototype.
  2. To propose a modern computer vision‐driven prototype that can perform accurate vehicle detection for traffic flow analysis.
  3. To understand how to assess the performance of a prototype based on evaluation criteria.

8.1.3.4 Significance and Structure of the Research

This research offers a unique perspective and contributes to the small body of existing literature covering the role of computer vision‐driven traffic solutions within New Zealand and other countries. Its significance lies in the way it informs how such a traffic monitoring solution might be developed, integrated, and customized. The research also draws attention to a current traffic flow analysis implementation to highlight what changes may be made to strengthen existing processes.

Furthermore, this research explores the benefits of incorporating emerging technologies, specifically computer vision‐based approaches, within traffic flow analysis in New Zealand. The significance of this comes from noting how computer vision is explored and employed on a global scale. For instance, successful use cases abroad include the use of computer vision to automate traffic lights to decrease congestion and save costs [7], assess the retroreflectivity of road signs to save time and manual maintenance [8], and estimate traffic flow through automation, minimizing the margin of error and offering higher rates of accuracy [9]. Use cases such as these show that computer vision is one of the most popular emerging technologies used within traffic management. However, looking into past usage of computer vision reveals very few local use cases, suggesting that there may be barriers preventing further adoption.

This research reports on the investigation, development, and testing and trialing of a computer vision‐driven prototype to perform vehicle detection, with some limitations. The first limitation is the lack of information reporting on the specifications of the current traffic flow analysis approaches employed. Although it is apparent which tools are used to carry out traffic flow analysis within New Zealand, the absence of definite information on factors such as accuracy, time efficiency, and cost makes it difficult to compare the research's prototype against them. The second potential limitation relates to the current pandemic circumstances: COVID‐19 may impact the research by limiting access to required resources, for example through long wait periods when purchasing products online from overseas vendors.

Section 8.2 presents a literature review covering relevant academic research within the field of computer vision. Section 8.3 comprises the methodology portion of the research, addressing the use of a design‐based research approach and examining the proposed study design and the software and hardware used. It also introduces the methodology, including the design, development, testing, analysis, and redesign of the prototype. References are made to the literature, and various insights, benefits, and potential barriers are discussed. Additionally, an outline of the next stages of the research project is presented, to be continued in Chapter 9. The last section is the conclusion, which also offers directions for future research.

8.2 Literature Review

8.2.1 Introduction

The literature review seeks to create familiarity with the use of computer vision and its present role within traffic management systems. The sources examined throughout the literature review include only primary, peer‐reviewed sources available from databases such as Google Scholar, ProQuest, and ERIC, and are divided into thematic subheadings. Keywords used to select the sources included “computer vision,” “computer vision and traffic management,” “machine learning,” “deep learning,” “benefits of computer vision,” “object detection in computer vision,” “object recognition in computer vision,” and “technical issues in computer vision.”

8.2.2 Machine Learning, Deep Learning, and Computer Vision

8.2.2.1 Machine Learning

Machine learning (ML) can be understood as a field of computer science that focuses on the study of computer algorithms that carry out intelligent predictions based on a given data input [10]. As ML algorithms are given more data, they are able to perform better, changing how they process data over time based on feedback from previous performances. This, in turn, serves as an extension of traditional statistical modeling approaches [11]. Additional reasons for the growing interest in ML include advancements in big data, a significant increase in computational power, and large progress in the general development and design of ML algorithms [12].

Some of the most popular and widely used ML algorithms fall under either the categories of supervised learning or unsupervised learning. Supervised learning techniques focus on either classification to predict a data label by mapping inputs to outputs or regression, with the objective to predict a data quantity [13]. Conversely, unsupervised learning techniques focus on providing either clustering, which concentrates on finding any fundamental groupings among data, or dimension reduction, which focuses on reducing the number of variables within data to localize the necessary information [14, 15].
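
To make these two categories concrete, the following minimal sketch contrasts a supervised classifier with an unsupervised clustering algorithm. It uses scikit-learn and its bundled Iris dataset purely for illustration; neither is part of this research's toolchain.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: labeled examples guide a mapping from inputs to outputs.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
classifier = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("classification accuracy:", classifier.score(X_test, y_test))

# Unsupervised learning: no labels; the algorithm finds groupings in the data.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("first ten cluster assignments:", clusters[:10])
```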

8.2.2.2 Deep Learning

Deep learning is a subset of ML and is modeled on the communicating neurons present within biological nervous systems [16]. It is important to understand the general concept of deep learning architectures and the structure of an artificial neural network (ANN). The architecture of the ANN is composed of three layers. As illustrated in Figure 8.1, the first layer is the input layer, where data is taken into the system for further processing within the succeeding layers [17]. Once data is taken into the input layer, it is transferred to the hidden layer, where the artificial neurons take in a set of weighted inputs and generate an output by applying an activation function. Lastly, the output layer takes the data and produces the given outputs.


Figure 8.1 The architecture of an artificial neural network.

Source: Bre et al. [17] / with permission of Elsevier.

Subsequently, deep learning serves as an extension of a typical ANN, adding two or more further layers, hence the term “deep” learning [18]. There are many different deep neural network (DNN) algorithms, with some of the most common being convolutional neural networks (CNNs), recurrent neural networks, recursive neural networks, and unsupervised pretrained networks (UPNs). Common characteristics of deep learning approaches include strong learning ability and being extremely powerful and efficient when utilizing significantly large datasets [18]. Deep learning requires high‐end graphics processing units and high‐performance hardware to process its workloads, and even then its overall time requirements are larger [19]. Although these factors can be considered constraints, a higher level of accuracy is often attributed to deep learning approaches [20].
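
As a minimal illustration of the layer structure described above (a sketch for exposition, not code from this research's prototype), the following NumPy fragment passes a single input vector through one hidden layer and an output layer, each applying a sigmoid activation; the weights are arbitrary illustrative values.

```python
import numpy as np

def sigmoid(z):
    # A common activation function applied by artificial neurons.
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])          # input layer: raw data enters here

W_hidden = np.random.randn(4, 3) * 0.1  # weights connecting the input to 4 hidden neurons
b_hidden = np.zeros(4)
h = sigmoid(W_hidden @ x + b_hidden)    # hidden layer: weighted inputs plus activation

W_out = np.random.randn(2, 4) * 0.1     # weights connecting the hidden layer to 2 outputs
b_out = np.zeros(2)
y = sigmoid(W_out @ h + b_out)          # output layer produces the final outputs
print(y)
```

A deep network simply stacks more hidden layers between the input and output, with the weights learned from data rather than set by hand.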

8.2.2.3 Computer Vision

Some examples of computer vision techniques can be demonstrated in subdomains such as image classification, object detection, object tracking, semantic segmentation, and instance segmentation [21]. Depending on which of these techniques is used, computer vision supports capabilities illustrated by software such as facial recognition, surveillance, biometrics, and self‐driving cars. The technique is chosen to match the system's objective; if a system's objective were speed detection, for example, the approach suited to that task would be implemented. One example can be shown within recent research into obstacle detection and classification for high‐speed autonomous driving [22]. To ensure a car is able to achieve full detection and classification of objects, the PASCAL VOC image dataset was selected, chosen because it is composed of many objects common within traffic contexts such as vehicles, pedestrians, and animals. The dataset was then used to train a region‐based CNN, which ran on a Titan X GPU and processed 10 frames per second (FPS). Due to such a high frame rate, the system was considered suitable for high‐speed driving of autonomous vehicles, and the results from object classification and detection demonstrated full invariance when tested under different conditions such as changes in lighting or climate [22].

8.2.3 Object Recognition, Object Detection, and Object Tracking

As this research is concerned with creating a computer vision‐driven prototype offering vehicle detection, this section not only focuses on how object detection functions but also explores the surrounding computer vision approaches of object recognition and object tracking. These approaches depend on one another. For instance, object detection relies on the initial recognition of object classes to function. Similarly, object tracking relies on the accurate detection of object classes to track them across video frames. Object tracking may then be employed on either previously recorded data or in real time to deliver the most up‐to‐date insights.

8.2.3.1 Object Recognition

Object recognition is a computer vision technique that enables software solutions to recognize object classes within images, video files, or livestream video sequence recordings. Once object classes have been recognized within data, they are assigned a label that serves as a title [23]. The process of object recognition is composed of the following steps. Data is first collected from images or video files. Once obtained, features can then be extracted from the data through the use of a feature extraction algorithm. This algorithm acquires certain features within each piece of data it is fed and is applied to differentiate between object classes later in the learning process [24]. Once this process has been completed, the extracted features can be added to the selected object recognition model, which classifies the features into different groupings. These groupings are then used to assist in analyzing newly input data and, in turn, recognize the object classes that exist within the data. Overall, there are many different types of feature extraction algorithms and object classification models that work together to achieve object recognition.
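
The following sketch illustrates this two‐stage pipeline (feature extraction followed by a recognition model) on scikit-learn's small bundled digits dataset, using histogram‐of‐oriented‐gradients (HOG) features from scikit-image and a linear support vector machine. It is a generic illustration under those assumptions, not the approach taken in this research.

```python
import numpy as np
from skimage.feature import hog
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

digits = load_digits()

# Step 1: feature extraction - convert each raw 8x8 image into a HOG descriptor.
features = np.array([
    hog(img, orientations=8, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])

# Step 2: train a recognition model on the extracted features.
X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.3, random_state=0)
model = LinearSVC(dual=False).fit(X_train, y_train)

# Step 3: recognize (label) previously unseen inputs.
print("recognition accuracy:", model.score(X_test, y_test))
```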

8.2.3.2 Object Detection

Object detection can be understood as a computer vision technique that enables software solutions to detect and locate objects of certain classes within images, video files, or livestream video sequence recordings. Sometimes, object detection is used interchangeably with the technique of object recognition. However, there is a difference between the two: object recognition is concerned with recognizing the object within image data, whereas object detection not only determines what the object is but also locates its position within the given data [25]. To further make sense of how object detection functions, it is important to differentiate between some of the terms commonly used when performing this approach. Below, object classification and object localization will be briefly examined in accordance with their role and function within object detection.

Object classification is one of the first steps necessary in completing object detection, used to predict the class within an image. This is also the core step in completing object recognition [25]. The first step of object classification is to feed a computer vision algorithm input data. Generally, the input data includes images composed of differing objects that the algorithm uses to learn and predict the objects in future data. Once the algorithm is given the input image data, it then outputs the same image data with a class label, such as one or more integers assigned to objects within the image data [26].

Once the computer vision algorithm has completed object classification, it is then able to move on to object localization. Object localization is a computer vision approach used to determine where an object is situated within an image or video file. For the algorithm to correctly identify the location of a specific object, differing approaches are employed, with one of the most common being bounding box regression, which is used to refine or predict localization boxes within data. Usually, the bounding box regressors are trained to identify the predefined object classes by regressing either fixed anchor boxes or region proposal networks [27]. Hence, object detection serves to fulfill two main objectives: to comprehend each object class, also known as object classification, and to understand where objects are located within an image, also referred to as object localization [28].
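
Because detections are expressed as bounding boxes, a standard way to judge how well a predicted box matches a ground‐truth box is intersection over union (IoU). The helper below is a small illustrative sketch, assuming boxes in (x1, y1, x2, y2) corner format; it is not taken from the prototype.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping rectangle, if any.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# A predicted vehicle box versus a hand-labeled ground-truth box.
print(iou((50, 50, 200, 150), (60, 60, 210, 160)))  # approximately 0.72
```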

8.2.3.3 Object Tracking

Object tracking concerns how an application can stay focused on specific objects that have been detected and determine the placement of those objects over successive video frames. This can be employed on batched video files or in real time. The process of object tracking can be broken down into three main components. The first step is taking a set of bounding box coordinates from object detection approaches. Once these coordinates have been obtained, an individual ID for each of the initial detections must be created. The final step is to track each of the detected objects as they change location through each video frame while simultaneously upholding the assignment of each of the IDs [29]. Overall, the process of assigning IDs to each object makes it possible to track them across the video frames.
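
A toy sketch of these three steps, loosely in the spirit of the centroid‐tracking approach described in [29], is given below. The distance threshold and the greedy nearest‐neighbor matching are simplifying assumptions for exposition, not the prototype's implementation.

```python
import math

def distance(p, q):
    # Euclidean distance between two centroids.
    return math.hypot(p[0] - q[0], p[1] - q[1])

class CentroidTracker:
    """Toy centroid tracker: assigns IDs to detections and keeps them across frames."""

    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}  # object ID -> last known centroid (x, y)
        self.max_distance = max_distance

    def update(self, boxes):
        # Step 1: reduce each bounding box (x1, y1, x2, y2) to its centroid.
        centroids = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for (x1, y1, x2, y2) in boxes]
        unmatched = dict(self.objects)
        updated = {}
        for c in centroids:
            # Step 3: greedily match a detection to the nearest tracked object.
            best = min(unmatched, key=lambda i: distance(unmatched[i], c), default=None)
            if best is not None and distance(unmatched[best], c) <= self.max_distance:
                updated[best] = c
                del unmatched[best]
            else:
                # Step 2: unmatched detections receive a fresh ID.
                updated[self.next_id] = c
                self.next_id += 1
        self.objects = updated
        return self.objects

tracker = CentroidTracker()
print(tracker.update([(10, 10, 30, 30)]))  # {0: (20.0, 20.0)}
print(tracker.update([(14, 12, 34, 32)]))  # the same vehicle keeps ID 0
```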

8.2.4 Edge Computing, Fog Computing, and Cloud Computing

When utilizing computer vision approaches, it is important to identify the way in which data will be processed for computations and how workloads will be handled and stored. Edge computing, fog computing, and cloud computing are architectures that offer processing and data storage capabilities. Accordingly, these architectures will be briefly considered with regard to their roles in supporting the use of data.

8.2.4.1 Edge Computing

The first architecture is edge computing. The term reflects its main objective: processing data at the edge of a network [30]. Edge computing can also be defined as a distributed computing paradigm where the management of data is completed closer to enterprise applications, such as local servers or IoT devices [31]. Computer vision approaches rely greatly on the speed at which data may be handled to offer real‐time results. Thus, by enabling data processing at different points and allowing data to be processed wherever it is required the most, edge computing offers enterprises the ability to achieve immediate insights that can be transformed into significant processes and practices.

8.2.4.2 Fog Computing

Fog computing is another approach to deploying computer vision applications. Also referred to as fogging or fog networking, it can be defined as a scenario in which a large number of heterogeneous, ubiquitous, and decentralized devices connect with one another to undertake processing and storage tasks [32]. These tasks can vary from supporting new services and applications to assisting with basic networking functions, and users who rent out these services are able to receive incentives as compensation. Fog computing is very similar to edge computing in that data is managed and processed closer to where it is originally created; due to this, the terms are often used interchangeably. However, one of the key differences between the two is where the processing of the data takes place [32]. Edge computing processes data directly on the devices close to where sensors are located, whereas fog computing ordinarily sends the computing workload to a local area network (LAN). Once the data has been sent to the LAN, it is processed through a node, hub, router, or gateway, and the results are then sent through to the elected devices [33].

8.2.4.3 Cloud Computing

Lastly, another option is cloud computing, which can be defined as the on‐demand delivery of computing services, including networking, software, storage, servers, and databases. The delivery of these services is offered through the Internet, also recognized by the term the “cloud” [34]. Many of these services are given acronyms, with some of the most popular being SaaS (software as a service), IaaS (infrastructure as a service), and PaaS (platform as a service). Furthermore, these service models are offered for remote use, allowing users to employ applications remotely while paying for only as much of the service as they require [35].

8.2.5 Benefits of Computer Vision‐Driven Traffic Management

This research is also concerned with understanding what benefits computer vision may offer within a traffic management context. Consequently, some examples of these benefits will be addressed below.

While the use of computer vision‐driven traffic management solutions is still a relatively new concept, many benefits are now becoming noticeable. One example is the research conducted by Fedorov et al. [9], who estimated the traffic flow rate using visual data from a video surveillance camera. To accomplish this, a modified version of the Faster R‐CNN two‐stage detector, as well as a SORT tracker, was employed. In addition, a region‐based heuristic algorithm was used to determine in what direction vehicles were moving. Once trials had been completed, the results demonstrated that the system was able to count the number of vehicles within its field of view and report on the direction of vehicle movement during heavy traffic, with a recorded mean error percentage of less than 10% [9].

A second benefit of using computer vision techniques within traffic management is illustrated by its uses within real‐time video processing. One example comes from Gulati and Srinivasan [5], who conducted research into various types of detector systems for traffic management supported by computer vision techniques. During the research, it was noted that conventional detection systems used within traffic management include RADAR, LASER, infrared, ultrasonic, and magnetometer sensors, all holding different competencies. However, when video image processing was compared against these conventional detection systems, it was acknowledged to hold the most capabilities, including detection, presence, speed, occupancy, classification, and count of vehicles. Additional benefits include the ability to offer real‐time traffic detection, comprehensive area detection, and cost‐effective installation and maintenance of traffic management solutions [5].

A third benefit of using computer vision within traffic management is the way it can reduce traffic jams and congestion on busy roads through automation features. One example is the research undertaken by Osman et al. [7], where a cost‐effective, intelligent, and automated traffic control system was designed and implemented. Two major components of the system were critical to its success: image processing and network settings. Image processing was run on a server because processing on embedded devices would cost significantly more; running it on a server was also more time efficient. With regard to network settings, a network was created between the traffic controller and the server utilizing the HTTP protocol alongside general networking techniques, a decision made because of its stable and well‐established communication protocols for images and other data [7].

8.2.6 Challenges of Computer Vision‐Driven Traffic Management

While there are clear benefits to using computer vision within traffic management, there are also technical challenges that accompany it.

8.2.6.1 Big Data Issues

Big data can be understood as information that exceeds the processing, storage, and computing capacity of traditional databases and data analysis approaches [36]. Computer vision approaches depend on big data to fulfill their objectives. Consider, for instance, a scenario employing an object detection algorithm for real‐time detection: not only does object detection require initial training data, but it must also analyze the data presented in real time. Such a reliance on big data often opens these approaches up to different vulnerabilities.

To address some of these barriers, Qiu et al. [37] undertook research into the use of ML processes for big data processing, finding that there are five critical issues ML may be met with when handling big data: the (i) volume, (ii) variety, (iii) velocity, (iv) veracity, and (v) value of data. For instance, volume represents the quantity of data that is available. Although large amounts of readily available data are often advantageous, there is also a drawback: as enterprises regularly amass terabytes to petabytes of information, the volume of big data can become insurmountable [38]. Because of this information overload, the volume characteristic is considered one of the most significant and distinctive components of big data analysis, prompting specific requirements that must be met to assist in managing it [38]. Thus, the role of computer vision approaches in handling large volumes of big data can be considered an issue not only in its management but also in dealing with the scale of differing data sources.

8.2.6.2 Privacy Issues

As evidenced above, computer vision greatly relies on big data to function. Hence, another point to consider is the precautions that must be taken to uphold and preserve privacy in certain aspects of data usage. These precautions must be taken to negate any negative consequences that may stem from poor differential privacy [39]. Overall, differential privacy can be defined as an instrument that outputs information about an underlying dataset while withholding specific data, so that the dataset can still serve its intended purpose. Thus, this instrument offers the required information, such as patterns or insights in the underlying data, while ensuring that sensitive information existing within that same dataset is not compromised [39]. Furthermore, differential privacy is not regarded as a binary notion, but rather a matter of accumulative risk [40]. This means the privacy of an individual is not considered simply breached or anonymous; instead, parameters quantify privacy loss through different mathematical functions.
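
As an illustration of such quantified privacy loss, the sketch below applies the classical Laplace mechanism for an ε‐differentially private counting query. The library calls are standard NumPy, but the traffic‐style records and the query itself are invented for exposition.

```python
import numpy as np

def dp_count(values, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy (Laplace mechanism)."""
    true_count = sum(1 for v in values if predicate(v))
    # A count changes by at most 1 when one individual's record is added or removed,
    # so the query's sensitivity is 1 and the noise scale is 1/epsilon.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Invented per-vehicle speed records; query: how many exceeded 50 km/h?
speeds = [42, 57, 63, 48, 51, 39, 70]
print(dp_count(speeds, lambda s: s > 50, epsilon=0.5))  # noisier, stronger privacy
print(dp_count(speeds, lambda s: s > 50, epsilon=5.0))  # closer to the true count of 4
```

Smaller values of ε add more noise and therefore leak less about any single record, which is exactly the accumulative‐risk parameter described above.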

Computer vision adoption has raised privacy concerns within the facial recognition, facial expression recognition, and scene recognition domains in recent years. This is largely because cameras have become ubiquitous and users are often unaware of their presence [41]. This reality not only calls for users to be granted the right to know where cameras are situated but also requires that computer vision‐driven solutions have adequate control over the real‐time collection and usage of this data.

8.2.6.3 Technical Barriers

There are some current technical barriers that computer vision faces upon its implementation, such as class imbalance problems, limitations of model generalization, absence of scene understanding, and issues faced during image synthesis. Below, the significance and meaning of these technical issues will be considered.

One of the most common issues surrounding the implementation of computer vision approaches is the class imbalance problem. Class imbalance can be defined as a classification issue where the distribution of instances across a given dataset is one sided or skewed. Class imbalance problems vary in severity; more severe instances, where high class imbalance occurs, may call for specialized techniques and approaches to be handled effectively [42]. Effective classification using imbalanced data is an exceptionally important area of research because high class imbalance inherently exists within many real‐world applications, such as cancer detection and fraud detection [43]. However, it has been noted that only a small amount of empirical research exists on the use of deep learning with class imbalance. Moreover, present research focuses its efforts on computer vision tasks with CNNs and neglects to consider the effects of big data [43].
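
One common mitigation, shown in the hypothetical sketch below with invented label counts, is to reweight classes in inverse proportion to their frequency so that errors on the rare class cost more during training; scikit-learn exposes this directly.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Invented, heavily skewed labels: 950 negatives, 50 positives.
y = np.array([0] * 950 + [1] * 50)

# "Balanced" weights are n_samples / (n_classes * class_count), so the
# rare class receives a proportionally larger weight during training.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))  # roughly {0: 0.53, 1: 10.0}
```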

A second technical issue to consider when implementing computer vision approaches is the limitation of model generalization. Model generalization can be defined as a model's ability to adapt properly to previously unseen data drawn from the same distribution as the data used to create the initial model [44]. Furthermore, data collection, data cleaning, data preparation, and model building each bring different challenges; one of the most significant is determining whether the model will predict well on future distributions [45]. Computer vision typically exploits a dataset to train and assess a model by splitting it into a training set and a test set [46]. The downside to this approach is that the training and test sets share the same data distribution, whereas the data encountered in deployment may vary from the data used throughout training and testing, for example through different camera angles or object scales.

8.3 Research Methodology

This section discusses how a traffic management prototype may be developed, tested, and analyzed. Sections 8.3.1–8.3.6 present the research questions, the study design and its adapted design‐based approach, and the selected hardware and software instruments. The design‐based research methodology is acknowledged alongside issues that may be encountered with this methodology and how they may be mitigated.

8.3.1 Research Questions and Objectives

The primary research question is as follows:

  1. How can computer vision be used to support traffic flow analysis?

Accompanying questions of this research include the following:

  1. What is required within the planning, development, and implementation phases of a prototype that performs accurate vehicle detection?
  2. How will the performance of a vehicle‐detection prototype be measured?
  3. What are the barriers to the development of a vehicle‐detection prototype?

As stated in Section 8.1.3.3, the central aim of the research is to investigate and discuss a vehicle‐detection prototype. The objectives are to describe the planning, development, and implementation phases of a modern vehicle‐detection prototype, to propose a modern computer vision‐driven prototype that can perform accurate vehicle detection for traffic flow analysis, and to understand how to assess the performance of a prototype based on evaluation criteria.

8.3.2 Study Design

The design‐based research methodology is composed of several key characteristics that make it an appropriate option for this research. A design‐based research methodology is defined by its focus on the design and testing of a meaningful intervention [47]. In this context, a meaningful intervention may be regarded not only as the simple act of designing and testing but also as a way of representing specific academic claims and embracing the relationship between theory, design, and practice [47]. Four features must be aligned within a design‐based methodology to ensure effective interventions: learning frameworks, outlined affordances of the selected instructional tools, demonstration of domain knowledge, and contextual limitations [47]. The design‐based research methodology is often split into four stages: the analysis of practical problems, the development of solutions, iterative cycles of testing, and reflection on design principles.

Table 8.1 The adaptation of the DBR methodology for this research.

Analysis of practical problems: The first step involves the completion of research into current approaches to traffic management using computer vision; these approaches can be viewed within the literature review.
Development and design: The second step involves the development and design of a prototype, taking into account the strengths and weaknesses identified in the literature.
Iterative testing: The third step includes testing the prototype.
Reflection and redesign: The fourth step involves reflecting on the testing and recalibrating if necessary.

8.3.2.1 Selection Rationale

The primary motivation for selecting the design‐based research methodology lies in the objectives that the methodology and the research share. As the objective of the research relates to the creation of a prototype through a phase‐based approach, it complements the phases laid out within the design‐based research methodology (Table 8.1).

8.3.2.2 Potential Challenges

Upon selecting the design‐based research methodology, it is essential to identify any potential challenges that may be presented when aligning it with the research, so that they can be mitigated. Below, some of the potential challenges that come with selecting the design‐based research methodology will be outlined.

It must first be noted that one of the most significant challenges of utilizing the design‐based research methodology is how it operates in different research contexts. For instance, the design‐based research methodology places significant emphasis on objectivity, reliability, and validity, features regarded as vital in ensuring that scientifically sound research is produced [48]. However, when these features are managed in a different context, such as a controlled experimentation environment, the way they are prioritized may also vary. This places the researcher in a difficult position: on the one hand, it is vital that objectivity is advocated for when facilitating meaningful interventions; on the other hand, the researcher may find themselves in the dual roles of advocate and critic [48].

A second challenge is the uncertainty that may arise when implementing this approach [49]. For instance, it has been posited that the efficacy of the design‐based research methodology is strongly dependent upon who conducts it. This contrasts significantly with other approaches, such as grounded theory or dedicated experiments, which commonly explain and outline their processes in particular detail. Although this flexibility gives researchers the ability to adopt a range of differing design processes that may fall under the design‐based research methodology, it is reasoned that some of these processes fail to wholly define the advanced phases of design. Similarly, an additional issue is that some of these design frameworks also fail to properly convey how they accompany research components [49].

Taking these challenges into account, a few provisions may be taken to help mitigate them. First, this research may place significant focus on aligning the research objectives with the defining key phases of the design‐based research methodology. Next, the way in which the design‐based methodology complements the research may be realized by offering a clear definition of its key phases. Easterday et al. [49] reason that presenting a well‐defined classification of each of the phases resolves uncertainty, informs new researchers by illustrating best practices, and, overall, strengthens communication among the research community.

8.3.3 Adapted Study Design Research Approach

Once the design‐based research methodology had been identified, it was applied to the research by adapting each stage of its phase‐based approach. To convey how this was done, each of its four phases is outlined below with an accompanying table.

  • Analysis: As the first phase of the design‐based research methodology is concerned with the analysis of practical problems, the research focused on analyzing present‐day literature surrounding the usage of computer vision (Table 8.2).
  • Development: The second phase focused its efforts on prototype design and development (Table 8.3).
  • Testing: The third phase was concerned with testing the prototype (Table 8.4).
  • Redesign: The final phase of the design‐based research concentrated on developing design principles that may be implemented in additional iterations (Table 8.5).

Table 8.2 Applying phase 1 of the DBR methodology to the prototype.

Phase 1 – Analysis: Analysis of practical problems faced by researchers and users when implementing solutions [50].
Summary of the adapted study design research approach:
  • Adopted an academic foundation through analysis of the present‐day use of computer vision, taking note of the approaches and software/hardware instruments utilized
  • Completed a literature review encompassing computer vision fundamentals
  • Used the literature to guide the research through an assessment of current‐day barriers, to assist in effective implementation

Table 8.3 Applying phase 2 of the DBR methodology to the prototype.

Phase 2 – Development: Development of solutions informed by present‐day design principles and technological innovations [50].
Summary of the adapted study design research approach:
  • Outlined weaknesses in current approaches to computer vision usage in traffic management
  • Addressed those weaknesses by designing a prototype utilizing recent technological innovations, achieved through the implementation of alternative instruments
  • Identified and adhered to relevant present‐day software design principles
  • Obtained the instruments and developed the computer vision prototype

8.3.4 Selected Hardware and Software

The tools required to create the traffic management prototype were identified through examining past and current instruments implemented in other projects and looking into recent technological innovations in the domain of computer vision. Below, each of the selected hardware and software components will be briefly examined with regard to their function and rationale behind selecting them.

Table 8.4 Applying phase 3 of the DBR methodology to the prototype.

Phase 3 – Testing: Iterative cycles of trialing and fine‐tuning of solutions in practice [50].
Summary of the adapted study design research approach:
  • Opted to implement iterative cycles of testing of the traffic management prototype
  • Identified and recorded the results from the testing of the prototype; results were then used to inform the suitability of design principles

Table 8.5 Applying phase 4 of the DBR methodology to the prototype.

Phase 4 – Redesign: Reflect upon iterations to employ additional design principles, using them to improve the performance of future solutions in following iterations [50].
Summary of the adapted study design research approach:
  • Opted to implement iterative cycles of redesign to assist the testing phase of the traffic management prototype
  • Reflected upon the results of the prototype in each iteration
  • Researched and employed modern‐day design principles in the redesign

8.3.4.1 Hardware: The NVIDIA Jetson Nano Developer Kit and Accompanying Items

The NVIDIA Jetson Nano Developer Kit can be understood as a small yet powerful computer that enables users to run multiple neural networks in parallel and, in turn, to harness capabilities such as image classification, segmentation, object detection, recognition, and tracking in real‐time applications [51]. The rationale for selecting the NVIDIA Jetson Nano Developer Kit over alternative modern embedded systems is as follows. First, the current literature on embedded systems for computer vision in traffic management shows that among the most popular choices is the range of Raspberry Pi‐embedded devices [52, 53]. There is therefore a gap in the literature surrounding the employment of the NVIDIA Jetson range for traffic management, in particular the NVIDIA Jetson Nano Developer Kit, released in March 2019. An additional reason for the selection is price [54]. One final reason lies in the objectives set out by the research and the requirements for designing and developing the vehicle detection prototype: as the research focuses specifically on vehicle detection approaches, it requires a significant amount of processing power. Composed of a quad‐core ARM Cortex‐A57 MPCore central processing unit (CPU), the NVIDIA Jetson Nano Developer Kit offers processing power sufficient for modern computer vision applications [55].

To use the NVIDIA Jetson Nano Developer Kit, a microSD card, a micro‐USB power supply, a network connection, a computer display using either HDMI or DP, and a USB keyboard and mouse are required. A microSD card with a minimum capacity of 16 gigabytes (GB) is recommended [51]; thus, a 64 GB microSD card was obtained for the prototype. A 5 V, 2 A micro‐USB power supply is also recommended, and so this was obtained as well.

Table 8.6 The hardware components obtained and utilized for the design and development of the prototype.

NVIDIA Jetson Nano Developer Kit:
  CPU: quad‐core ARM A57 at 1.43 GHz
  GPU: 128‐core Maxwell
  Memory: 4 GB 64‐bit LPDDR4, 25.6 GB/s
  Storage: microSD card (not included)
  Camera: 2x MIPI CSI‐2 PHY lanes
  Connectivity: Gigabit Ethernet, M.2 Key E
  Display: HDMI and DP
  USB: 4x USB 3.0, USB 2.0 Micro‐B
SanDisk microSD card (64 GB): speeds up to 170 MB/s read, up to 90 MB/s write
HP X900 mouse: wired USB connection
Keyboard: wired USB connection
Computer monitor: Apple Mac 21‐in. desktop
Belkin power adapter: power supply 5 V, 2 A
Belkin Micro‐USB cable: length 3.3 ft/1 m
Ethernet cable: length 20 m
Raspberry Pi V2 camera module:
  Size: 25 mm x 24 mm x 9 mm
  Video modes: 1080p30, 720p60, and 640 x 480p60/90
  Still resolution: 8 megapixels
  Sensor: Sony IMX219
  Sensor resolution: 3280 x 2464 pixels

The NVIDIA Jetson Nano Developer Kit also has camera capabilities; to utilize this feature for the development of the prototype, a Raspberry Pi V2 camera module was obtained due to its low cost and its compatibility across a wide range of differing edge computing devices [56] (Table 8.6).

8.3.5 Software Proposed

8.3.5.1 Software Stack: NVIDIA Jetpack SDK and Accompanying Requirements (All Iterations)

The underlying software considered foundational is the NVIDIA Jetpack Software Development Kit (SDK). This is one of the most comprehensive solutions for building deep learning applications on the NVIDIA Jetson Nano Developer Kit, as it is composed of many components that may be used for design and development, including a range of software libraries and application programming interfaces (APIs) [57]. Furthermore, the NVIDIA Jetpack SDK includes the latest operating system (OS) images for the NVIDIA Jetson range, as well as sample applications and demos, developer tools, and documentation [57]. Each time the Jetpack SDK is updated to a newer version, so too are the versions of the libraries included within it. Jetpack versions 4.2 and 4.4 were both obtained and utilized for the design and development of the prototype, written to two identical microSD cards in order to test approaches using both older and newer versions of the software (Table 8.7).

8.3.6 Proposed Development Process

The design‐based research methodology will guide the iterations of the vehicle‐detection prototype. The first iteration will take into account the phases of environment setup, design and development, implementation, analysis, and redesign of the prototype. Before the vehicle detection approaches can be carried out, the development environment needs to be prepared. Setting up the environment involves the following steps. First, the researcher writes the OS image to the microSD card by downloading the Jetson Nano Developer Kit SD card image available on the NVIDIA website. Second, the NVIDIA Jetson Nano Developer Kit hardware needs to be set up. Third, as this is the first time the Jetson Nano Kit is powered on, an initial configuration is required. Chapter 9 will describe these steps, along with the testing and the analysis of the results, in more detail (Figure 8.2).
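
To give a concrete sense of what the first iteration's detection loop might look like, the following is a hedged sketch that assumes NVIDIA's open‐source jetson‐inference Python bindings together with the SSD MobileNet V2 model listed in Table 8.7. The module names, camera URI, and vehicle label names are assumptions for illustration; the actual implementation is described in Chapter 9.

```python
import jetson.inference
import jetson.utils

# Load the SSD MobileNet V2 detector (assumed available through jetson-inference).
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Raspberry Pi V2 camera module on the Jetson Nano's CSI port (assumed URI).
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

VEHICLE_LABELS = {"car", "truck", "bus", "motorcycle"}  # assumed COCO-style labels

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    # Keep only detections whose class description matches a vehicle label.
    vehicles = [d for d in detections
                if net.GetClassDesc(d.ClassID) in VEHICLE_LABELS]
    display.Render(img)
    display.SetStatus("vehicles in frame: {}".format(len(vehicles)))
```

Per‐frame counts alone would overcount vehicles that remain in view across frames, which is where the tracking and ID assignment discussed in Section 8.2.3.3 would come in.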

Table 8.7 Software obtained and utilized for the development and design of the prototype.

Ubuntu (18.04): Linux distribution offering an open‐source operating system (OS)
NVIDIA Jetpack SDK (4.2 and 4.4): SDK for building AI applications
CUDA (10 and 10.2): parallel computing platform
CuDNN (8): GPU‐accelerated library of primitives for deep learning frameworks
TensorFlow: software library for dataflow and differentiable programming
TensorRT: SDK for deep learning inference
Python (2.7 and 3.6): language used to execute the models in both approaches
SSD MobileNet model (V2): neural network run on the Jetson Nano
COCO dataset: used within the first approach as training data
Open Images dataset (V6): used within the second approach as training data
Repositories from GitHub: two repositories forked and cloned across the approaches

Figure 8.2 The Jetson Nano Developer Kit after setup.

Source: NVIDIA.

8.4 Conclusion

With this chapter, a vehicle‐detection prototype has been proposed as a way to record live traffic flow. The plan follows a design‐based research methodology and has outlined only the essential features. Future proposals may also examine a way to record data and then transmit it to the prototype at a later stage. Additional features to investigate include a graphical user interface (GUI) for user‐friendliness and an API to output vehicle detection data more conveniently than through the terminal.

While this study has moved toward developing a functional prototype, future studies may look into better customizing existing prototypes to support smart traffic. For instance, Salonen [58] conducted research into traffic safety, in‐vehicle security, and traffic management when using driverless shuttle buses, analyzing the passengers' own experiences, issues, and expectations from their perspective. Future testing for vehicle detection may include feedback from drivers, those evaluating new tools for traffic flow analysis, computer vision developers, and other researchers. Evaluating these perceptions may provide insight into additional changes or improvements for future applications.

NVIDIA is working on computer vision developments that may offer more options surrounding suitable open‐source code in the future. One example is NVIDIA Metropolis, an edge‐to‐cloud platform supported by SDKs such as Jetpack that can be used to maintain and improve smart cities through the use of artificial intelligence (AI) [59]. Overall, vehicle detection holds much potential for assisting in conducting traffic flow analysis in smart cities in the future. Based on the software and hardware components in this chapter, the implementation of the prototype for vehicle detection commenced. The implementation and testing are discussed in Chapter 9.

This research has discussed a variety of computer vision approaches and contributed to the literature by offering an up‐to‐date overview and outlining practical development approaches. These will also inform other researchers who are investigating similar domains using embedded devices within the IoT.

References

  1. Elgendy, M. (2019). Deep learning for vision systems. https://livebook.manning.com/book/grokking-deep-learning-for-computer-vision/chapter-1/v-8/ (accessed 24 July 2021).
  2. Buch, N., Velastin, S.A., and Orwell, J. (2011). A review of computer vision techniques for the analysis of urban traffic. IEEE Transactions on Intelligent Transportation Systems 12 (3): 920–939. https://doi.org/10.1109/TITS.2011.2119372.
  3. Grauman, K. and Leibe, B. (2010). Visual object recognition. https://cs.gmu.edu/~kosecka/cs682/grauman-recognition-draft-27-01-11.pdf (accessed 24 July 2021).
  4. MetroCount (2015). RoadPod. https://metrocount.com/products/roadpod-vehicle-tube-classifier/ (accessed 19 September 2020).
  5. Gulati, I. and Srinivasan, R. (2019). Image processing in intelligent traffic management. International Journal of Recent Technology and Engineering 8 (2S4): 213–218. https://doi.org/10.35940/ijrte.B1040.0782S419.
  6. Song, H., Liang, H., Li, H. et al. (2019). Vision‐based vehicle detection and counting system using deep learning in highway scenes. European Transport Research Review 11 (1): 51. https://doi.org/10.1186/s12544-019-0390-4.
  7. Osman, T., Psyche, S., Ferdous, S., and Zaman, H. (2017). Intelligent traffic management system for cross section of roads using computer vision. 2017 IEEE Annual Computing and Communication Workshop and Conference (CCWC) (9–11 January 2017). Las Vegas, NV, USA: IEEE.
  8. Ai, C. and Tsai, Y.J. (2016). An automated sign retroreflectivity condition evaluation methodology using mobile LIDAR and computer vision. Transportation Research Part C: Emerging Technologies 63: 96–113. https://doi.org/10.1016/j.trc.2015.12.002.
  9. Fedorov, A., Nikolskaia, K., Ivanov, S. et al. (2019). Traffic flow estimation with data from a video surveillance camera. Journal of Big Data 6 (1): 73. https://doi.org/10.1186/s40537-019-0234-z.
  10. IBM (21 October 2019). What is machine learning? https://www.ibm.com/topics/machine-learning (accessed 24 July 2021).
  11. Panch, T., Szolovits, P., and Atun, R. (2018). Artificial intelligence, machine learning and health systems. Journal of Global Health 8 (2). https://doi.org/10.7189/jogh.08.020303.
  12. Qolomany, B., Al‐Fuqaha, A., Gupta, A. et al. (2019). Leveraging machine learning and big data for smart buildings: a comprehensive survey. IEEE Access 7: 90316–90356. http://arxiv.org/abs/1904.01460.
  13. Abu‐nimeh, S., Nappa, D., Wang, X., and Nair, S. (2007). A comparison of machine learning techniques for phishing detection. ECrime '07: Proceedings of the Anti‐Phishing Working Groups 2nd Annual ECrime Researchers Summit (4–5 October 2007). Pittsburgh, Pennsylvania, USA: ICPS.
  14. Kanungo, T., Mount, D.M., Netanyahu, N.S. et al. (2002). An efficient k‐means clustering algorithm: analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7): 881–892. https://doi.org/10.1109/TPAMI.2002.1017616.
  15. Nguyen, L.H. and Holmes, S. (2019). Ten quick tips for effective dimensionality reduction. PLoS Computational Biology 15 (6): e1006907. https://doi.org/10.1371/journal.pcbi.1006907.
  16. Amazon Web Services, Inc. (2020). Deep learning on AWS. https://aws.amazon.com/deep-learning/ (accessed 26 May 2020).
  17. Bre, F., Gimenez, J.M., and Fachinotti, V.D. (2017). Prediction of wind pressure coefficients on building surfaces using artificial neural networks. Energy and Buildings 158: 1429–1441. https://doi.org/10.1016/j.enbuild.2017.11.045.
  18. Dargan, S., Kumar, M., Ayyagari, M.R., and Kumar, G. (2019). A survey of deep learning and its applications: a new paradigm to machine learning. Archives of Computational Methods in Engineering. https://doi.org/10.1007/s11831-019-09344-w.
  19. Ceron, R. (5 December 2019). AI, machine learning and deep learning: what's the difference? IBM IT Infrastructure Blog. https://www.ibm.com/blogs/systems/ai-machine-learning-and-deep-learning-whats-the-difference/ (accessed 24 July 2021).
  20. Kowsari, K., Brown, D., Heidarysafa, M. et al. (2017). Hierarchical deep learning for text classification. https://www.researchgate.net/publication/319968747_HDLTex_Hierarchical_Deep_Learning_for_Text_Classification (accessed 24 July 2021).
  21. Liu, L., Ouyang, W., Wang, X. et al. (2020). Deep learning for generic object detection: a survey. International Journal of Computer Vision 128 (2): 261–318. https://doi.org/10.1007/s11263-019-01247-4.
  22. Prabhakar, G., Kailath, B., Natarajan, S., and Kumar, R. (2017). Obstacle detection and classification using deep learning for tracking in high‐speed autonomous driving. 2017 IEEE Region 10 Symposium (TENSYMP) (14–16 July 2017). Cochin, India: IEEE.
  23. Bansal, M., Kumar, M., and Kumar, M. (2020). 2D object recognition techniques: state‐of‐the‐art work. Archives of Computational Methods in Engineering. https://doi.org/10.1007/s11831-020-09409-1.
  24. Mathworks (2020). Object recognition. https://www.mathworks.com/solutions/image-video-processing/object-recognition.html (accessed 4 October 2020).
  25. Vipul, J. (15 February 2018). Image recognition vs object detection: the difference. Hackernoon. https://hackernoon.com/micro-learnings-image-classification-vs-object-detection-the-difference-77110b592343 (accessed 24 July 2021).
  26. Brownlee, J. (21 May 2019). A gentle introduction to object recognition with deep learning. Machine Learning Mastery. https://machinelearningmastery.com/object-recognition-with-deep-learning/.
  27. Lee, S., Kwak, S., and Cho, M. (2019). Universal bounding box regression and its applications. https://arxiv.org/abs/1904.06805.
  28. Zhao, Z.‐Q., Zheng, P., Xu, S., and Wu, X. (2019). Object detection with deep learning: a review. IEEE Transactions on Neural Networks and Learning Systems 30: 1–21.
  29. Rosebrock, A. (23 July 2018). Simple object tracking with OpenCV. PyImageSearch. https://www.pyimagesearch.com/2018/07/23/simple-object-tracking-with-opencv/ (accessed 24 July 2021).
  30. Huang, D. and Wu, H. (2018). Edge clouds: pushing the boundary of mobile clouds. In: Mobile Cloud Computing (eds. D. Huang and H. Wu), 153–176. Morgan Kaufmann. https://doi.org/10.1016/B978-0-12-809641-3.00008-9.
  31. IBM (2020). What is edge computing. https://www.ibm.com/cloud/what-is-edge-computing (accessed 18 August 2020).
  32. Naha, R., Garg, S., Georgakopoulos, D. et al. (2018). Fog computing: survey of trends, architectures, requirements, and research directions. IEEE Access 6: 47980–48009. https://www.researchgate.net/publication/326171764_Fog_Computing_Survey_of_Trends_Architectures_Requirements_and_Research_Directions.
  33. Klonoff, D.C. (2017). Fog computing and edge computing architectures for processing data from diabetes devices connected to the medical Internet of Things. Journal of Diabetes Science and Technology 11 (4): 647–652. https://doi.org/10.1177/1932296817717007.
  34. Microsoft Azure (2020). What is cloud computing? https://azure.microsoft.com/en-us/overview/what-is-cloud-computing/ (accessed 18 August 2020).
  35. Wilson, A. (1 April 2018). Cloud computing: the next frontier for vision software. Vision Systems Design. https://www.vision-systems.com/factory/article/16739360/cloud-computing-the-next-frontier-for-vision-software (accessed 24 July 2021).
  36. Najafabadi, M., Villanustre, F., Khoshgoftaar, T. et al. (2015). Deep learning applications and challenges in big data analytics. Journal of Big Data 2. https://doi.org/10.1186/s40537-014-0007-7.
  37. Qiu, J., Wu, Q., Ding, G. et al. (2016). A survey of machine learning for big data processing. EURASIP Journal on Advances in Signal Processing 2016 (1): 67. https://doi.org/10.1186/s13634-016-0355-x.
  38. Hadi, H.J., Shnain, A.H., Hadishaheed, S., and Ahmad, A.H. (2015). Big data and the five V's characteristics. International Journal of Advances in Electronics and Computer Science 2 (1): 9.
  39. Ji, Z., Lipton, Z., and Elkan, C. (2014). Differential privacy and machine learning: a survey and review. https://www.researchgate.net/publication/269997816_Differential_Privacy_and_Machine_Learning_a_Survey_and_Review (accessed 24 July 2021).
  40. Harvard University (2020). Differential privacy. https://privacytools.seas.harvard.edu/differential-privacy (accessed 31 July 2020).
  41. Das, A., Degeling, M., Wang, X. et al. (2017). Assisting users in a world full of cameras: a privacy‐aware infrastructure for computer vision applications. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1387–1396. IEEE. https://doi.org/10.1109/CVPRW.2017.181.
  42. Brownlee, J. (22 December 2019). A gentle introduction to imbalanced classification. Machine Learning Mastery. https://machinelearningmastery.com/what-is-imbalanced-classification/.
  43. Johnson, J.M. and Khoshgoftaar, T.M. (2019). Survey on deep learning with class imbalance. Journal of Big Data 6 (1): 27. https://doi.org/10.1186/s40537-019-0192-5.
  44. Google Developers (2020). Generalisation. https://developers.google.com/machine-learning/crash-course/generalization/video-lecture (accessed 3 August 2020).
  45. Bateman, B. (2018). Challenges of generalisation in machine learning. https://blogs.oracle.com/datascience/challenges-of-generalization-in-machine-learning (accessed 24 July 2021).
  46. Dai, J. and Lin, S. (2018). Image recognition: current challenges and emerging opportunities. Microsoft Research. https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/image-recognition-current-challenges-and-emerging-opportunities/ (accessed 24 July 2021).
  47. Anderson, T. and Shattuck, J. (2012). Design‐based research. Educational Researcher 41: 16–25. https://doi.org/10.3102/0013189X11428813.
  48. The Design‐Based Research Collective (2003). Design‐based research: an emerging paradigm for educational inquiry. Educational Researcher 32 (1): 5–8. https://doi.org/10.3102/0013189X032001005.
  49. Easterday, M., Rees Lewis, D., and Gerber, E. (2014). Design‐based research process: problems, phases, and applications. In: Proceedings of International Conference of the Learning Sciences, vol. 1 (eds. B. Penuel, S. Jurow and K. O'Connor), 317–324. ICLS.
  50. Hong, J., Cochrane, T., and Withell, A. (27 November 2018). Developing a design‐based research methodology for designing MR technologies for mountain safety. ASCILITE 2018, Deakin University, Geelong, Australia. https://www.researchgate.net/publication/329174708_Developing_a_design-based_research_methodology_for_designing_MR_technologies_for_mountain_safety.
  51. Nvidia Developer (5 March 2019). Getting started with the Jetson Nano Developer Kit. https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit (accessed 24 July 2021).
  52. Bhusari, S., Patil, S., and Kalbhor, M. (2015). Traffic control system using Raspberry Pi. Global Journal of Advanced Engineering Technologies 4 (4): 3.
  53. Lokesh, S. and Reddy, P. (2014). An adaptive traffic control system using Raspberry Pi. Internet of Things for Traffic Control. https://www.researchgate.net/publication/282612189_An_Adaptive_Traffic_Control_System:Using_Raspberry_PI (accessed 24 July 2021).
  54. Nvidia Developer (n.d.). Jetson Nano Developer Kit. https://developer.nvidia.com/embedded/jetson-nano-developer-kit.
  55. Nvidia Developer (2020). Hardware for every situation. https://developer.nvidia.com/embedded/develop/hardware (accessed 25 August 2020).
  56. Raspberry Pi (2020). Buy a Camera Module V2 – Raspberry Pi. https://www.raspberrypi.org/products/camera-module-v2/ (accessed 4 September 2020).
  57. Nvidia Developer (2020). JetPack. http://docs.nvidia.com/jetson/jetpack/introduction/index.html (accessed 4 September 2020).
  58. Salonen, A.O. (2018). Passenger's subjective traffic safety, in‐vehicle security and emergency management in the driverless shuttle bus in Finland. Transport Policy 61: 106–110. https://doi.org/10.1016/j.tranpol.2017.10.011.
  59. Nvidia (2020). Build smarter cities through AI. https://www.nvidia.com/en-us/industries/smart-cities/ (accessed 8 November 2020).