4
Industrial Automation (IIoT) 4.0: An Insight Into Safety Management

C. Amuthadevi1 and J. Gayathri Monicka2*

1Department of Computer Science and Engineering, SRMIST, Kattankalathur, India

2Department of Electrical and Electronics Engineering, SRMIST, Ramapuram Campus, India

Abstract

Industry 4.0 is the recent revolution in the automotive and manufacturing industries, producing smart devices and driving complete digitalization. One of the significant applications of IIoT is predictive maintenance. In hazardous places, disaster prediction helps manage people's safety and reduce damage. Machine Learning (ML) approaches are very useful for prediction: ML develops a mathematical, trainable model to analyze data and can be combined with different kinds of existing methods for interrogating data. This chapter discusses how advanced, automated data processing connects with new Computer Vision (CV) applications that directly influence human lives and the safety of physical assets in hazardous places. Activities such as disaster early warning, recovery, and reconstruction systems are studied, and some practical issues in the planning and recovery processes are discussed.

Keywords: IIoT, predictive maintenance, Machine Learning, optimization, disaster safety management

4.1 Introduction

Earlier industrial revolutions in the 1980s and 1990s drove innovation in manufacturing technologies. Industry 4.0 is the recent revolution in the automotive and manufacturing industries, building smart devices by combining a wide range of computing technologies such as cloud, networking, Big Data, fog/edge computing, the Internet of Things (IoT), Machine-to-Machine communication (M2M), Machine Learning (ML), and Cyber-Physical Systems (CPS). Industry 4.0 is also called the Industrial IoT (IIoT). A crucial facet of Industry 4.0 is autonomous operation driven by the IoT: the idea that connected physical/virtual objects, devices, and machines can communicate with each other. The revolution is thus the integration of computing and manufacturing [1].

ML algorithms make applications intelligent and improve their cognitive ability. ML is a kind of Artificial Intelligence (AI): once an algorithm/model is trained with data samples, it can take test data and produce output based on that training. It thus becomes possible to mimic human decisions in real-time situations based on past experience (data-driven reasoning). Many common real-time applications can be built by combining IIoT and ML algorithms. To classify new real-time (test) data, the pattern matching it must be identified in the training data [2].

Figure 4.1 illustrates how the different domains are connected to realize the functionality of IIoT. Different kinds of communication devices send raw data to connected nodes, which may aggregate it and respond immediately. Even when the decision is made in nearby nodes, the data and the resulting decisions are updated in the cloud server. In some situations, a specific alarm is raised, depending on the nature of the application.

Smart cities, smart vehicle management, smart positioning, manufacturing, and self-driving cars are some of the common applications of IIoT.


Figure 4.1 Working of IIoT.

The smart decision-making used in these kinds of applications can also be applied to disaster management. There are two types of disasters: natural and man-made. Natural disasters are triggered by many causes, such as soil erosion, seismic activity, tectonic movements, air pressure, and ocean currents.

Some of the natural disasters are:

  • (i) volcano,
  • (ii) tsunami,
  • (iii) flood,
  • (iv) earthquake,
  • (v) landslide,
  • (vi) forest fires, and
  • (vii) hurricane.

Among these, a few can be predicted in advance, and such predictions can save many lives. Prediction also helps the government take the necessary actions and alert people to migrate to safer places.

Some of the man-made disasters are listed in the following:

  • (i) hazardous material spills,
  • (ii) fires,
  • (iii) contamination of groundwater,
  • (iv) accidents in transportation,
  • (v) structure failures,
  • (vi) mining accidents,
  • (vii) blasts and acts of terrorism.

Often, data must be collected from various kinds of sensors, all of which are battery-powered devices. To conserve energy, reduce complexity, and improve security, fast data processing is needed. Caching is a useful technique that avoids redundant data transmissions and reduces latency. Complexity can be reduced while increasing efficiency, thanks to emerging ML and evolutionary algorithms that handle large volumes of data. IoT analytics extracts valuable information from raw data; that data may be sent to multiple analytics pipelines, which can create security breaches.
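
As a minimal illustration of the caching idea above, the Python sketch below keeps the last value forwarded per sensor and suppresses retransmission when a new reading is within a tolerance of it; the names (Reading, DedupCache) are hypothetical, and a real gateway would replace the print with its uplink call.

    # Minimal sketch of gateway-side caching to suppress redundant sensor
    # transmissions. All names (Reading, DedupCache) are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_id: str
        value: float

    class DedupCache:
        """Remembers the last forwarded value per sensor and skips
        retransmission when a new reading changed less than `tolerance`."""

        def __init__(self, tolerance: float = 0.1):
            self.tolerance = tolerance
            self.last_sent: dict[str, float] = {}

        def should_forward(self, reading: Reading) -> bool:
            prev = self.last_sent.get(reading.sensor_id)
            if prev is not None and abs(reading.value - prev) < self.tolerance:
                return False                  # redundant: stay silent, save energy
            self.last_sent[reading.sensor_id] = reading.value
            return True

    cache = DedupCache(tolerance=0.5)
    for r in [Reading("temp-01", 24.1), Reading("temp-01", 24.2), Reading("temp-01", 27.0)]:
        if cache.should_forward(r):
            print(f"forwarding {r.sensor_id}={r.value}")  # stand-in for the uplink call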

4.1.1 Fundamental Terms in IIoT

The different domains interconnected in IIoT are explained in the following.

4.1.1.1 Cloud Computing

Cloud computing is a distributed computing technology that offers heterogeneous resources as on-demand services [3]. The major advantages of clouds are reliability, cost savings, and effectively unlimited data storage capacity. It provides the following services:

  • (i) Infrastructure-as-a-Service (IaaS): provides physical machines, virtual machines, storage, and memory.
  • (ii) Platform-as-a-Service (PaaS): provides runtime environments and development and deployment tools.
  • (iii) Software-as-a-Service (SaaS): provides licensed software to customers.

The deployment models are private, public, hybrid, and community clouds. An important aspect is that customers can utilize resources for as long as they are needed and pay only for what they actually use. Google is one prominent cloud service provider used by many users worldwide.

4.1.1.2 Big Data Analytics

With the massive growth of sensors and mobile devices connected via the Internet, big data encompasses huge datasets [storage exceeding one terabyte (TB)] and is useful for analyzing real-time data to make decisions. In IoT, data arrives from heterogeneous devices/events in the form of Complex Event Patterns (CEP). Most events must be processed quickly for immediate decision-making in the current situation. Decision-making needs pre-processing, correlation with what happened in recent events from various data sources, and rules created to predict future events [4]. Together, these allow organizations to build real-time solutions on IoT and extract insights from millions of events within a nominal period.

The data collected may be structured, semi-structured, unstructured, or a combination of more than one type. Some popular analytical tools are YARN, MapReduce, HDFS, and Spark. They can be used for clustering, classification, and prediction. A big data processing pipeline includes data collection, training, and querying with big data tools.
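
The collect/aggregate/query steps of such a pipeline can be sketched with one of the tools named above. The fragment below is a minimal PySpark example, assuming pyspark is installed; the file path and column names (sensor_events.csv, sensor_id, value) are hypothetical.

    # Minimal PySpark sketch of the collect/aggregate/query steps described
    # above. File path and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("iiot-analytics").getOrCreate()

    # Collection: load semi-structured CSV events into a DataFrame.
    events = spark.read.csv("sensor_events.csv", header=True, inferSchema=True)

    # Querying: per-sensor aggregates that a downstream ML stage could consume.
    summary = (events.groupBy("sensor_id")
                     .agg(F.avg("value").alias("mean_value"),
                          F.max("value").alias("peak_value")))
    summary.show()
    spark.stop()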

4.1.1.3 Fog/Edge Computing

Many IoT applications need a quick response, either because the application is critical or for cost reduction. Therefore, latency (the duration from raw-data transmission to receiving output at the actuators) should be minimal. If the cloud is used, the cost will be lower because of shared resources, but latency will be significant because of the round-trip time between sensor nodes and remote servers. Fog and edge computing are two solutions to this problem; the difference lies in where the computing happens. In edge computing, for latency-sensitive applications, most processing such as aggregation and computation is done at the edge/local gateway, while the data and other resource-intensive operations still reach a centralized data collection server [5].

In edge computing, all data collection nodes connect to a local gateway. In fog computing, processing is moved to nearby Local Area Networks (LANs), which sit somewhat farther away than the gateways used in edge computing. Fog computing offers low-latency computing facilities at the LAN, an enabler for the evolving IoT systems [6].

4.1.1.4 Internet of Things

The Internet of Things (IoT) is a system of interconnected physical/virtual devices/objects that are given Unique IDentifiers (UIDs) and can all communicate over a network without requiring human-to-human or human-to-computer interaction. Smart vehicles, smart watches, home automation, and remote healthcare monitoring are some real-world examples of IoT applications. The working of IoT devices together with analytics is represented in Figure 4.2. The data produced by IoT devices does not carry analytical power by itself; it may be noisy and heterogeneous, so preprocessing is required. After pre-processing, big data analytics can be combined with IoT devices to identify hidden or unobserved patterns and turn them into useful information [7]. This helps companies, organizations, and individuals manage large amounts of data, extract useful information through big data mining to make important decisions, and identify trends and make predictions.


Figure 4.2 Big data analytics with IoT.

4.1.1.5 Cyber-Physical-System

A CPS is an innovative digital system that combines computing, communication, and control into a single system through closed feedback control loops. A CPS rests on two important concepts:

  • (i) It has fast connectivity to acquire real-time data in the physical environment (such as temperature and humidity via low-cost sensors) and derive useful information in the cyber space.
  • (ii) By managing the data and using advanced computation ability, it creates the cyber space [8].

Therefore, a CPS pairs sensors and actuators to interact with the real-time environment/physical world, and such systems can react to changes in the environment to automate or control tasks.
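
A closed feedback control loop of this kind can be reduced to a few lines. The sketch below shows the sense-decide-actuate cycle with a simple on/off (bang-bang) controller; read_temperature and set_heater are hypothetical stand-ins for real sensor and actuator drivers.

    # Minimal closed-loop (bang-bang) control sketch for the sense-decide-actuate
    # cycle of a CPS. read_temperature/set_heater are hypothetical stand-ins
    # for real sensor and actuator drivers.
    import random, time

    SETPOINT = 22.0   # desired temperature in deg C
    HYSTERESIS = 0.5  # dead band to avoid rapid switching

    def read_temperature() -> float:
        return 20.0 + random.uniform(-3, 3)     # placeholder for a real sensor read

    def set_heater(on: bool) -> None:
        print("heater", "ON" if on else "OFF")  # placeholder for a real actuator

    heater_on = False
    for _ in range(5):                          # a real loop would run forever
        temp = read_temperature()
        if temp < SETPOINT - HYSTERESIS:
            heater_on = True
        elif temp > SETPOINT + HYSTERESIS:
            heater_on = False
        set_heater(heater_on)
        time.sleep(0.1)                         # control period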

The aim of CPS is use in large-scale systems, automating tasks while improving efficiency, reliability, usability, and safety. In future CPS developments, human control/intervention will shrink and core control will grow, based on the intelligent cores developed in automated systems. A CPS differs from an embedded system: an embedded system focuses on devices embedded in a single, stand-alone system, whereas a CPS focuses on a large number of connected physical and computational devices [9]. CPS uses many evolving technologies, given in the following:

  • (i) Multi-Agent Systems (MAS) for intelligence and adaptation over decentralized, modular systems,
  • (ii) Service-Oriented Architecture (SOA) for interoperability,
  • (iii) Big Data for large-scale data mining,
  • (iv) Cloud for sharing remote resources,
  • (v) M2M for interconnectivity among devices, and
  • (vi) Augmented reality to bring the human inside the process.

4.1.1.6 Artificial Intelligence

To make devices "smart", they need intelligence [10]. Different disciplines such as philosophy, computer science, statistics, sociology, and psychology define AI differently. AI is a technology that gives a system reasoning power like a human's traditional decision-making. Which decision is best under specific scenarios in specific domains is still arguable: sometimes AI lacks creativity in its decisions, while humans can decide correctly but take a long time. AI should resemble human intelligence in taking rational decisions at appropriate times. Initially, AI algorithms were developed for playing board games against humans; AI was then applied to symbolic reasoning, and with its popularity and success it became an interdisciplinary field. Intelligence can be improved by adding more domain-specific knowledge, data, and analysis. Depending on the needs of an application, only certain tasks can be automated, and this is called narrow AI.

AI relies heavily on data-science methods and may need tools to handle voluminous, real-time data. Data science gives broader insights into data [11] and analyzes and predicts under uncertainty. The origins of data science are the Knowledge Discovery in Databases (KDD) process and statistics. The analysis includes data exploration, hypothesis testing, regression, and time series analysis, as given in the following (a small example follows the list):

  • (i) Data exploration performs pre-processing,
  • (ii) Hypothesis testing: questions arising from the explored data are converted into statements called hypotheses, which may be proved true or false,
  • (iii) Regression finds the parameters relating the data to the target information, and
  • (iv) Time series analysis is useful for temporally structured data, where predicting future values is the most challenging task.
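
As a small example of the regression and time series steps above, the sketch below fits a linear trend to a synthetic series with scikit-learn and extrapolates one step ahead; the data are invented for illustration.

    # Minimal sketch of the regression step above: fit a linear trend to a
    # toy time series and extrapolate one step ahead. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    t = np.arange(10).reshape(-1, 1)                    # time index as the only feature
    y = 2.0 * t.ravel() + np.random.normal(0, 0.5, 10)  # noisy linear trend

    model = LinearRegression().fit(t, y)
    next_value = model.predict([[10]])                  # one-step-ahead forecast
    print(f"slope={model.coef_[0]:.2f}, forecast={next_value[0]:.2f}")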

4.1.1.7 Machine Learning

ML is a subdomain of AI; it focuses on developing algorithms that make components or computing devices learn automatically and improve their performance from experience (given as data samples). ML algorithms work iteratively to discover hidden or natural patterns in the data samples and use them for reliable, better predictions or decisions. After training, when these algorithms are exposed to unseen data, they can adapt independently.

ML is one of the most trending methods and is used in almost all domains, at the intersection of technologies such as AI and data science. Many emerging algorithms are being developed to process data faster and make accurate predictions [12]. ML has tremendous applications in speech recognition, Natural Language Processing (NLP), healthcare, financial modeling, recommendation systems, face detection, object recognition (Computer Vision), etc.

The learning phase of ML contains the following steps:

  • (i) Collection of training data,
  • (ii) Pre-processing,
  • (iii) Feature selection or extraction,
  • (iv) Model training with selected features.

There are three major kinds of ML algorithms, as shown in Figure 4.3:

  • (i) Supervised learning,
  • (ii) Unsupervised learning, and
  • (iii) Reinforcement learning.

(i) Supervised Learning

Supervised learning has n data samples as input-output pairs (labeled data). The learning task is to find the function mapping inputs to outputs. The input is a vector, and the output is a single value or categorical label. If the prediction is a categorical label, the task is called classification; if it is a continuous value, it is called regression. The mapping function may be linear or non-linear. Some supervised learning algorithms are Decision Trees (DT), Support Vector Machines (SVM), linear regression, Naive Bayes, logistic regression, the k-Nearest Neighbor algorithm (k-NN, for classification), and Neural Networks (multilayer perceptron). For the selected algorithm, the parameters must be learned to improve prediction accuracy; parameters are significant in ML algorithms. In semi-supervised learning, only a few of the data samples are labeled and most are unlabeled.
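
A minimal supervised-learning example with scikit-learn, using the standard labeled Iris dataset: a k-NN classifier learns the input-to-label mapping and is scored on held-out test data. The parameter choices (k = 5, 30% test split) are illustrative only.

    # Minimal supervised-learning sketch using scikit-learn: train a k-NN
    # classifier on the labeled Iris dataset and measure accuracy on held-out data.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)           # labeled input-output pairs
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    clf = KNeighborsClassifier(n_neighbors=5)   # k is a parameter to be tuned
    clf.fit(X_train, y_train)                   # learn the input-to-label mapping
    print("test accuracy:", clf.score(X_test, y_test))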


Figure 4.3 Types of ML.

Nowadays, Deep Learning (DL) algorithms are popular because they solve complex problems even when the data samples are varied, unstructured, and interconnected. A DL network has multiple layers of neurons (similar to Artificial Neural Networks): one input layer, one output layer, and multiple hidden layers. Algorithms like Back-Propagation (BP) learn the weights of the interconnected neurons automatically from inputs such as images, text, or sound. Manual feature extraction is not needed because of the convolution and pooling operations: convolution repeatedly filters the input to obtain feature maps, and pooling performs down-sampling and dimensionality reduction.
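
A minimal sketch of the convolution and pooling idea, written in PyTorch and assuming it is installed: a tiny CNN whose convolutional layer produces feature maps and whose pooling layer down-samples them, so no manual feature extraction is needed. Layer sizes are illustrative.

    # Minimal sketch of the convolution + pooling idea in PyTorch: a tiny CNN
    # whose layers learn their own features, so no manual feature extraction.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),  # filtering -> feature maps
                nn.ReLU(),
                nn.MaxPool2d(2),                            # pooling -> down-sampling
            )
            self.classifier = nn.Linear(8 * 14 * 14, n_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinyCNN()
    dummy = torch.randn(4, 1, 28, 28)   # batch of 4 grayscale 28x28 images
    print(model(dummy).shape)           # torch.Size([4, 10])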

(ii) Unsupervised Learning

In unsupervised learning, the dataset is not labeled. Hidden patterns among the given data are identified, and based on feature similarity, the datasets are grouped into a number of clusters, which may then be labeled Group 1, 2, …, n. Different distance measures, such as Manhattan (city block) distance and Euclidean distance, are used to find the similarities between data samples. Unsupervised learning is used for exploratory analysis and dimensionality reduction.

Figure 4.4 illustrates the difference between supervised and unsupervised learning using the same kind of data samples.

Fuzzy and k-means clustering are most commonly used for clustering. The objective of clustering is to reduce dissimilarity within a cluster and increase dissimilarity between clusters. Dimensionality reduction shrinks the feature set so that ML algorithms take in only the useful and relevant features, for either classification or clustering.
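
A minimal unsupervised-learning example with scikit-learn's k-means: six unlabeled points fall into two natural groups, and the algorithm assigns each point a cluster ID (not a class label). The points are synthetic.

    # Minimal unsupervised-learning sketch: group unlabeled points into two
    # clusters with k-means, then report which cluster each point fell into.
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],    # one natural group
                  [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])   # another natural group

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("labels:", km.labels_)          # e.g., [0 0 0 1 1 1] (group IDs, not classes)
    print("centers:", km.cluster_centers_)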

(iii) Reinforcement Learning

Reinforcement Learning (RL) is a goal-oriented approach with one or more agents that interact with the real-world environment to make decisions. It may use supervised or unsupervised learning to build the model. The agent, which may be hardware, software, or a combination, reacts to the environment. When a correct decision is made, a reward (positive score) is given; for an incorrect decision, a penalty (negative score) is given. The ultimate aim of RL is to obtain the highest cumulative score. In the absence of a training dataset, RL is destined to learn from its own experience. In robotics, RL behaves in novel ways by using the trial-and-error concept.


Figure 4.4 Supervised and unsupervised learning.


Figure 4.5 Reinforcement learning.

The elements of RL are given in the following (a minimal sketch follows the list):

  • (i) Policy: the agent's way of behaving in the current situation, like a stimulus-response mapping.
  • (ii) Reward function: increasing the reward is the goal of RL. If the action taken in the current state moves toward the goal, a positive value, called the reward, is awarded.
  • (iii) Value function: defines how much reward the agent can expect to accumulate from the current state to the goal state; it reflects the long-term behavior of the environment.
  • (iv) Model of the environment: the model predicts the next state and the corresponding reward. The working of a simple agent is represented in Figure 4.5.
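
Tying these elements together, the sketch below is a minimal tabular Q-learning example on an invented 5-cell corridor: the Q-table plays the role of the value function, the ε-greedy choice is the policy, the corridor transition is the environment model, and reaching the rightmost cell yields the reward.

    # Minimal tabular Q-learning sketch. The environment is a 5-cell corridor
    # with the goal (reward +1) at the right end. Entirely synthetic.
    import random

    N_STATES, ACTIONS = 5, [-1, +1]            # move left / move right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:               # until the goal state is reached
            a = random.randrange(2) if random.random() < EPSILON \
                else max((0, 1), key=lambda i: Q[s][i])      # policy
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)   # environment model
            r = 1.0 if s2 == N_STATES - 1 else 0.0           # reward function
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])  # value update
            s = s2

    print([round(max(q), 2) for q in Q])       # values grow toward the goal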

4.1.1.8 Machine-to-Machine Communication

M2M is a communication technology for remote processing. The communication may be serial, Power-Line Communication (PLC), or wireless communication via IoT. It may also use LANs and Wide Area Networks (WANs) along with the Internet. To build the IIoT environment, all devices such as sensors, actuators, and gateways should be integrated and communicate autonomously without human intervention [13]. M2M can translate data and trigger actions based on preprogrammed procedures. Most of the challenges in M2M arise from heterogeneous networks, connectivity issues, diverse computing machines, and policies [14].

4.1.2 Intelligent Analytics

As increasing numbers of connected sensors are used, they create voluminous data periodically, which should be aggregated before being sent to a remote server. To avoid redundant and non-useful data, big data mining can be used; this eases the burden of data transmission.

Intelligent analytics helps to understand and analyze the data and to identify useful information, which helps organizations make well-formed decisions and inform the relevant people in the organization at the right time. It uses the concepts of big data, AI, and data science. Data visualization from different viewpoints must be provided based on the needs of an application. By using appropriate tools, current data, and historical information, analytics should produce accurate results. Mature predictions lead to optimization.

Intelligent analytics is categorized into three types, descriptive, predictive, and prescriptive, built on mining, AI, and statistics [15].

  • (i) Descriptive analytics: used in streaming contexts; it answers the queries "What and why did a particular event happen?" and "What is happening now?"
  • (ii) Predictive analytics: used to answer the queries "What event will happen in the future, and why?"
  • (iii) Prescriptive analytics: answers the queries "What should be done now, and why?" It attracts researchers because it is the most mature form of data analytics and leads to optimal solutions.

4.1.3 Predictive Maintenance

IIoT applications integrate sensed data with predictions to find significant relationships between the predictions and the maintenance procedures. In manufacturing industries, the objective is to assure maximum throughput in the manufacturing line and to improve production while reducing cost throughout the life cycle of the equipment. Data-driven approaches are used for predictive maintenance. In any organization, each piece of equipment is an important asset; to avoid a complete shutdown, fault prediction is vital. Complete monitoring of every device helps predict when servicing is required to avert a fault and when to back up the data. Three kinds of maintenance procedures are used in industries [16]:

  • (i) Corrective maintenance: the component is restored or substituted after wear, a fault, or a breakdown.
  • (ii) Preventive maintenance: faults are recognized as a result of scheduled walkthroughs or inspections, before breakdown.
  • (iii) Predictive maintenance: uses sensor data for continuous monitoring of a system and evaluates it against historical events to predict a failure before it happens.

Based on the specific needs of applications, guidance can be designed on how to utilize streaming technologies and big data analytics tools for predictive-maintenance use cases [17]. Each domain and application has different requirements, and a suitable analytical tool is chosen accordingly. While automating complex operations, one challenge is the reliability and safety of devices and data [18], which can greatly affect decision-making.

In this chapter, predictive maintenance is considered for asset safety during hazardous events.
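
A minimal sketch of the data-driven idea: learn to flag imminent failure from condition-monitoring features. Everything here (the two features, the failure rule, the thresholds) is synthetic and for illustration only.

    # Minimal sketch of data-driven predictive maintenance: learn to flag
    # imminent failure from two synthetic condition-monitoring features
    # (vibration, temperature). Data and threshold values are invented.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    vibration = rng.normal(1.0, 0.3, n)
    temperature = rng.normal(60.0, 8.0, n)
    # Synthetic ground truth: machines running hot AND vibrating hard tend to fail.
    fails_soon = ((vibration > 1.2) & (temperature > 65)).astype(int)

    X = np.column_stack([vibration, temperature])
    X_tr, X_te, y_tr, y_te = train_test_split(X, fails_soon, random_state=1)

    clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
    print("holdout accuracy:", clf.score(X_te, y_te))
    print("alert?", clf.predict([[1.5, 72.0]]))   # a suspicious new reading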

4.1.4 Disaster Prediction and Safety Management

Natural disasters are beyond human control and unavoidable. Many countries have faced different kinds of disasters, some of them very deadly.

The World Health Organization (WHO) stated that from 1900 to 2018, nearly 14 million disasters occurred worldwide [19].

4.1.4.1 Natural Disasters

Our country has faced a wide range of natural hazards such as flooding, drought, cyclones, extreme heat waves, landslides, wildfires, and earthquakes. The assets that need safety are listed in Table 4.1, according to priority. A few of the unfortunate events that happened in India are listed in the following:

  • (i) Uttarakhand flash floods in 2013, caused by heavy rainfall and enormous landslides.
  • (ii) Kashmir floods in 2014, caused by unremitting torrential rainfall and the swelling of the river Jhelum.
  • (iii) Bihar flood disaster in 2007, caused by rainfall five times the 30-year monthly average.
  • (iv) The Indian Ocean Tsunami in 2004, which originated near Indonesia and affected 12 countries.

Table 4.1 Types of assets that need safety [50].

S. no.  Type of asset   Example
1.      Humanity        Human lives
2.      Physical asset  Buildings, roads, gateways, and devices
3.      Software        System and application software
4.      Communication   Networks, clouds
5.      Data            Sensor data, big data
  • (v) Gujarat earthquake in 2001.
  • (vi) Odisha super cyclone in 1999.
  • (vii) Great Bengal Famine in 1770, caused by drought.

4.1.4.2 Disaster Lifecycle

The disaster lifecycle is represented systematically in Figure 4.6.

  • (i) The preliminary phase is disaster identification, based on earlier catastrophes, which is useful for forecasting future occurrences. It helps arrange the needed resources in a timely manner so that rescue happens at the appropriate time [20].
  • (ii) Prediction forecasts disaster events based on historical data, community experience, and analysis of environmental and geographical data. Based on previous history, wireless nodes can be deployed in places where disasters have occurred, for effective monitoring.
  • (iii) Mitigation is closely related to prediction. It helps save human lives and reduce the economic loss from disaster events. Approaches used in mitigation include preventive maintenance, goal-oriented evaluation, land-use planning, and regulation of building standards.
    Figure 4.6 Disaster lifecycle.

  • (iv) Preparation involves planning a set of procedures to manage emergency situations. It includes creating awareness and ensuring that necessary essentials are available. Preparation ensures a timely response.
  • (v) Response and recovery: response executes the plan immediately, as per the well-defined preparation, to save lives through efficient, timely, and coordinated communication with the people. Recovery is a follow-up procedure suitable for rehabilitation.

4.1.4.3 Disaster Prediction

It is very difficult to predict natural disasters like earthquakes; doing so requires many activities and coordinated effort to save people and manage the economic loss. Effective management needs data collection covering the situations during and after an earthquake, plus coordination and effective decision-making [21]. Even though disasters are unavoidable, disaster reduction can be achieved through prediction.

Over data collected for long periods, many algorithms and approaches support prediction. Big data analytics combined with ML algorithms helps visualize (finding common and hidden patterns not easily identifiable by humans) and analyze disaster prediction.

This analysis can be useful in six major disaster management applications [22]. They are as follows:

  • (i) Early warning of damage,
  • (ii) Damage assessment,
  • (iii) Monitoring and detection,
  • (iv) Forecasting and prediction,
  • (v) Post-disaster coordination and response,
  • (vi) Long-term risk assessment and risk reduction.

Some applications, like landslide prediction, are required in mountainous areas. A landslide is a dynamic and very complicated system. Continuous time series monitoring data must be processed to forecast landslide displacement [23]; displacement leads to the landslide.

For cyclone prediction, the exact positions of high- and low-pressure areas are tracked, to predict how heat in these areas migrates in the tropics, using satellite images and computer technologies. Such a model can predict 5 to 7 days ahead from the starting date of prediction. Floods can be predicted using storm monitoring and rainfall amounts; topography, soil moisture, and river-basin characteristics can also be considered. Sometimes, wireless sensors deployed in water bodies report the water levels, and this data is used for flood forecasting.
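
A minimal sketch of flood forecasting from such water-level sensors: fit a linear trend to recent readings and raise an alert when the extrapolated level crosses a danger mark. The readings, horizon, and danger mark are invented.

    # Minimal sketch of flood forecasting from water-level sensors: fit a
    # linear trend to recent readings and alert if the extrapolated level
    # crosses a danger mark. Readings and threshold are synthetic.
    import numpy as np

    levels = np.array([2.1, 2.3, 2.6, 3.0, 3.5])   # hourly river levels (m)
    DANGER_MARK = 4.0                              # hypothetical danger level (m)
    HORIZON = 3                                    # hours ahead to forecast

    t = np.arange(len(levels))
    slope, intercept = np.polyfit(t, levels, 1)    # least-squares linear trend
    forecast = intercept + slope * (t[-1] + HORIZON)

    print(f"rise rate={slope:.2f} m/h, {HORIZON}h forecast={forecast:.2f} m")
    if forecast >= DANGER_MARK:
        print("ALERT: forecast level exceeds the danger mark")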

Satellite images and remote weather prediction are used to predict forest fires and help decrease the forest area burned per annum. Forest fires may cause multiple transmission lines to trip concurrently, posing a severe threat to power-grid safety. The risk is hard to estimate efficiently, which makes it very difficult to optimize the allocation of fire-extinguishing equipment.

4.1.4.4 Safety Management

In each country, the government makes different policies on safety management. The motivation behind these policies is to pool resources from the population against the uncertainty of extreme disaster events. The mitigating role is to help people recover as early as possible, based on a mitigation plan that the government has already framed. Safety-related policies differ from one location to another. Disaster damage, loss, and the number of people in the surrounding area are used to prioritize safety management policies. Vulnerability to disasters should be reassessed based on social, physical, and economic factors [24].

Construction in a place where a disaster has occurred needs perfect planning, because it involves multiple safety considerations, and multiple experts must be involved in the decisions. A group of experts can apply fuzzy theory to weigh multiple attributes and consider all risk factors when analyzing safety concerns. This technique helps planners and emergency responders converge the experts' views in safety planning for reconstruction [25]. A safety confirmation system [26] suits evacuation-center operations: the headquarters consolidates the risks and analyzes the operations suggested by the evacuation centers. The system also uses communication channels to inform relatives in remote locations about refugees' details via the government's local website.

Nakamura and Ribeiro [50] suggested the following phases of risk assessment (a small sketch of the risk-matrix step follows the list):

  • Phase 1: Risk calculation context
  • Phase 2: Points of attack detection
  • Phase 3: Mapping of threat
  • Phase 4: Privacy, safety and security and reliability mapping
  • Phase 5: Detection of vulnerability
  • Phase 6: Estimation of probabilities
  • Phase 7: Estimation of impacts
  • Phase 8: Calculation of risk matrix
  • Phase 9: Prioritization of security controls
  • Phase 10: Action plan
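
As a small illustration of Phases 6 to 9 above, the sketch below combines invented probability and impact scores (1 = low to 5 = high) into risk-matrix cells and ranks the threats; all threat names and scores are hypothetical.

    # Minimal sketch of Phases 6-8 above: combine estimated probabilities and
    # impacts into a risk matrix and rank threats. All threat names and scores
    # (1 = low .. 5 = high) are hypothetical.
    threats = {
        #                            probability, impact
        "groundwater contamination": (2, 5),
        "transformer fire":          (3, 4),
        "sensor-node battery loss":  (5, 2),
    }

    risk = {name: p * i for name, (p, i) in threats.items()}   # risk matrix cell
    for name, score in sorted(risk.items(), key=lambda kv: -kv[1]):
        print(f"{score:>2}  {name}")          # Phase 9: prioritize controls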

4.1.5 Optimization

For any problem, multiple feasible solutions may be available, each with a different impact on the real-time scenario. Some nature-inspired optimization algorithms are Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Memetic Algorithms (MA). Selecting the proper method improves effectiveness. Efficient forecasting and scheduling operations play a vital role in saving people and lowering damage in the event of disasters; these operations are evolutionary and optimization problems. The response phase needs optimization to select locations, allot medical tents, and plan evacuation simultaneously [27]. The critical nature of the scenarios must be considered when making these choices. Approaches like Monte Carlo simulation and dynamic p-robust models are used to explore different scenarios and formulate new paths for safety relief. Identifying the various parameter values is useful for robust, reliable optimization. Rapid decision support improves response and recovery.

During an emergency, the goal should be supplying emergency services to the people while minimizing transport-unit losses, using an optimization model for crowded conditions [49]. Bayesian models combined with optimization algorithms such as ACO, for finding efficient paths between places, are needed to manage congested conditions.
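
A minimal genetic-algorithm sketch of the evolutionary idea behind these methods: selection, crossover, and mutation evolve a single decision variable toward the minimum of a toy cost function, which stands in for a real relief-cost model. All parameters are illustrative.

    # Minimal genetic-algorithm sketch: evolve a real-valued decision variable
    # x to minimize a toy cost function. Cost function and GA parameters are
    # illustrative only.
    import random

    def cost(x: float) -> float:
        return (x - 3.0) ** 2            # stand-in for a real relief-cost model

    POP, GENS, MUT = 20, 50, 0.3
    population = [random.uniform(-10, 10) for _ in range(POP)]

    for _ in range(GENS):
        population.sort(key=cost)                 # rank by fitness (lower = better)
        parents = population[: POP // 2]          # selection: keep the best half
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                 # crossover: average of parents
            child += random.gauss(0, MUT)         # mutation: small random step
            children.append(child)
        population = parents + children

    best = min(population, key=cost)
    print(f"best x={best:.3f}, cost={cost(best):.4f}")   # approaches x=3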

The remaining sections of this chapter are organized as follows. Section 4.2 describes methods and technologies related to predictive maintenance and safety management. Section 4.3 discusses the research limitations in this domain. Section 4.4 presents the findings, and the final section concludes with directions for future research.

4.2 Existing Technology and Its Review

Many disasters are unpredictable, and their unexpected occurrence causes unbelievable damage to human lives. Under these circumstances, active disaster management demands very rapid response and action. This is crucial to mitigate the effects of disasters. Efficient information management and fast communication recovery are vital mechanisms for disaster response and relief. This section outlines some of the existing technology for the IoT environment and natural disaster management.

4.2.1 Survey on Predictive Analysis in Natural Disasters

Hasegawa et al. [28] used a device called QUEST (Q-shu University Experiment with Steady-State Spherical Tokamak) to originate a large current of 50 kA. To minimize operational costs and reduce the danger of unexpected failures, the contact resistance of multiple ohms was evaluated for predictive maintenance. Predictive maintenance was achieved by monitoring the occurrence of problems, so failures were forestalled and maintenance was executed in a planned manner. An alarm system was built, and the experimental information was shared among systems over a network. Some devices were assigned static Internet Protocol (IP) addresses; control devices had static IP and Media Access Control (MAC) addresses to avoid wasting network resources.

W. Gao [29] used an ANN to predict landslides using a Grey System. The monotonically increasing displacement of a landslide requires analysis of time series data in the specific area where landslides happen frequently. A Grey system combined with an Evolutionary Neural Network (ENN) was proposed for the prediction system; the ENN architecture used the decomposition of displacement and the time series trend. Backpropagation (BP) was used in the ENN and applied to the Xintan landslide. The results proved that this method was better at landslide prediction.

Matthews [30] applied decision theory to analyze how accurately and reliably earthquakes can be predicted. Even with optimistic estimates for the key parameters, a reliable decision function for action is needed. He identified two key factors on which the value of earthquake prediction depends:

  • (i) the base rate of severe quakes on the time scale of the precursor, and
  • (ii) the relative costs of ignoring correct predictions and of responding to false alarms.

Even with common values of these factors, the minimum accuracy, expressed as the likelihood ratio needed for correct predictions of significant earthquakes, is not achieved by state-of-the-art methodologies.

G. Molchan and Romashkova [31] worked on earthquake prediction and the quality of space-time prediction using a two-dimensional error diagram (n, τ), in which n represents the fraction of failures-to-predict and τ represents the local (averaged) alarm rate in space. Earthquake events from 1985 to 2009 (with magnitudes between 8 and 8.5) were taken as data samples, and the M8 algorithm was used for prediction. Upper estimates were taken to handle the uncertainty.

Takahashi et al. [32] developed a network on the ocean floor to predict earthquakes and tsunamis in the Japan Trench. By deploying seismometers and pressure sensors, events of magnitude above 8 were recorded. From the sensor data, the arrival time, tsunami height, and duration were predicted. They observed that the prediction accuracy depends on the tsunami height and the target point. Real-time prediction was performed with a buoy system using the seismometers, and the estimation for the target area was shown to be strongly correlated with the target point.

Voigt et al. [33] applied Earth-observation capabilities to national and international services, relief efforts, and security issues during disaster situations, using multi-source satellite images. Image analysis algorithms mapped the tasks of the resource lifecycle to management support and were used for rapid mapping of disaster response in tsunamis, landslides, and forest fires.

Nico et al. [34] in Canada and Rauste et al. [35] in Finland used thermal images to map wild forest fires with the Advanced Very High Resolution Radiometer (AVHRR). Raw thermal satellite data was taken as input, and the number and extent of actively burning fires were estimated.

4.2.2 Survey on Safety Management and Recovery

Hannah et al. [36] discussed various issues regarding Disaster Recovery Planning (DRP), including recovering damaged computers connected to hospital groups. The planning ensured that healthcare applications reached the people affected during the disaster without interrupting hospital and nursing functions.

Alabdulwahab [37] implemented technology-based support for businesses to recover from disaster. Data, hardware, and software are the major assets of IT companies; different strategies to recover the data were framed.

Sanderson et al. [38] examined one of the essential needs, transitional shelter, as used in Haiti, and analyzed decision-making processes that could shape adoption policy and the building of T-shelters. Based on the decisions, the results were compared with different responses planned for urban areas, post-disaster recovery, and redevelopment skills.

Barnes et al. [39] developed a novel method for classification and detection of structures in high-resolution satellite images. Image features of the near-shore area were used for response planning during emergencies. Both storm-impacted and non-impacted features were extracted from the images, especially blocked routes, and used to automatically locate search and rescue areas. σ-tree template structures reduced the training time required and did not need large amounts of computing resources.

Harris and Laubsch [40] surveyed different kinds of airships at different heights above the earth for capturing live video to enable agile response. The flight ceilings of the airships are designed according to the tallest buildings in the disaster-affected area.

Moore [41] identified issues through interactions with electrical workers, corporate managers, and safety-management professionals at a national hospital facility. The challenges in the recovery processes were identified to explore electrical safety and disaster preparedness. A few case studies were done, and the following were identified:

  • (i) The challenges of handling a recovery project after an electrical disaster, which affects employers and pending legal action.
  • (ii) The energized work plan to be observed by safety professionals and executives.
  • (iii) The maintenance and safety culture to be followed by employers after a hazardous event.

Slinn [42] proposed a recovery system for railway management. After an interlocking failure, pictures were shared over the communication link between the sensors and the control point. From the pictures, the trains and their states were identified to decide which trains should be removed. COTS technology was used to obtain reliable pictures in railway management. Encryption was suggested to assure the display of a remote fail-safe image, and embedded image read-back was used to enforce image lifetime.

4.2.3 Survey on Optimizing Solutions in Natural Disasters

Shao [43] surveyed the potential threats to IT-based companies and proposed a discrete optimization model that uses redundancy for critical functions in the recovery-planning phase of recovery management. The overall objective was to maximize the survivability of the organization's IT functions. Each function was allocated a different level of redundancy based on priority, and dynamic probability-based programming was applied. Where redundancy was absent, the model reduced reliability problems and led to fault tolerance.

Lu et al. [44] suggested an optimization method for disaster prevention in urban places using Geographic Information System (GIS) technology. They worked on the optimal (fast-estimated) allocation of fire-extinguishing equipment in a power grid during fire disasters. A quick calculation method was proposed for the risk index of a transmission line, with the Hunan provincial power grid as a case study. Using ample computing resources (30 trillion operations per second), the risk index was calculated in under five minutes so that fire-extinguishing equipment could be deployed promptly.

Unmanned Aerial Vehicles (UAVs) were used by Zhang and Liu [45] because of their wide coverage, low cost, and fast deployment in post-disaster environments. Two cooperative UAVs were used: one for downlink transmission and another serving rescue vehicles for emergency response. Data packets were transmitted from a UAV to a vehicle when the duration for which the UAV covered the vehicle exceeded the stated average network access delay. Performance metrics were given for the network, and the method produced good numerical results; optimal settings of the network parameters achieved optimal performance in the post-disaster areas.

Cheng-fang Wang and Yi-min Sun [46] analyzed different types of disaster in urban areas. Several case studies were done to identify the basic correlation between urban morphology optimization and disaster prevention. Parameters such as the location of the city, environment, urban size, structure, and the direction of urban development were used in the analysis. Spatial information from GIS was used to optimize urban layout with respect to disaster direction.

Yuan et al. [47] proposed the Resilient Distribution Network Planning Problem (RDNP) to coordinate the distribution of resources, with the objective of minimizing damage to the system. Hurricanes create huge economic losses and human casualties. The problem was formulated as a two-stage robust optimization model, and uncertainty was handled with the N-K contingency method. The system was validated using micro-grids and showed good resilience during natural disasters. Yuan et al. [48] worked on improving grid resilience during hurricanes; consequences such as physical and cyber attacks can be reduced by optimal plans. One optimal solution is planning decisions that coordinate DG placement, the distribution power-flow model, and hardening.

Changfeng et al. [49] developed an optimization based on a Bayes risk function for managing emergency resources in a transportation network, using a case study to capture the composite association between supply distribution and path selection, considering factors such as travel-time uncertainty and the remaining road capacity during distribution. An emergency logistics network structure was taken, and the total loss of the disaster area was predicted using the Bayes risk function. The ACO algorithm was used in crowded situations, and the optimization model was shown to behave better under congested conditions.

4.3 Research Limitation

Limitations are impacts that cannot be controlled by the researchers. They may be deficiencies, situations, or influences that cannot be measured by the investigators. They impose restrictions on the methodology and affect the conclusions.

4.3.1 Forward-Looking Strategic Vision (FSV)

International Data Corporation (IDC) analysts note that in most real-time environments, sensors for data collection are yet to be deployed, so many IoT applications struggle to collect real data. Even where sensors are deployed, communication works properly, and huge data is collected, IoT users may not be satisfied, because of lacking data-analytics skills and because of security concerns stemming from not having or developing an FSV.

Many mathematical models from statistics and ML (sometimes combined as hybrids) are needed, together with big data, to obtain scientific and quantifiable results. If suitable approaches are used with optimal parameters, the goal can be attained. If the collected dataset is biased, then the result may be incorrect for new kinds of data. A balanced interplay of data-analysis insights gives informative and successful solutions.

4.3.2 Availability of Data

The benefits of disaster management depend on available and appropriate data. The collected data may lack sufficient information, may not be in a structured format, and may be aggregated from different locations. Sometimes the data is irrelevant; false and incomplete data leads to uncertain outcomes. Sometimes mathematical models must be used to identify missing and incorrect values in the collected data. Therefore, to make the data directly usable, some pre-processing methods must be applied.
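
A minimal pandas sketch of such pre-processing: flag the missing sensor values and fill each station's gaps with that station's own mean. The tiny data frame is synthetic, and mean imputation is only one of many possible repairs.

    # Minimal pandas sketch of the pre-processing step above: flag and fill
    # missing sensor values before analysis. The tiny frame is synthetic.
    import pandas as pd
    import numpy as np

    df = pd.DataFrame({
        "station": ["A", "A", "B", "B"],
        "water_level": [2.1, np.nan, 3.4, np.nan],   # gaps from failed reads
    })

    print("missing per column:\n", df.isna().sum())
    # Simple model-free repair: fill each station's gaps with its own mean.
    df["water_level"] = df.groupby("station")["water_level"] \
                          .transform(lambda s: s.fillna(s.mean()))
    print(df)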

4.3.3 Load Balancing

There are four layers in IoT: the sensor and actuator layer (sometimes these two are combined as the perceptual layer), the network layer, the data processing layer, and the application layer. The sensor and actuator layer has direct access to the gateway. In edge computing, most of the data processing is done in the gateway, so most computing resources are needed at the local gateway.

In fog computing, data must be transferred to a data processing layer that is farther from the sensors. Therefore, resource allocation is a challenging task that depends on the type of computing. A dynamic, priority-based scheduling model should be selected, according to the nature of the application. Based on the chosen scheduling method and the real-time situation, the available resources should be balanced; this resource-allocation task turns the computation into distributed computing.

4.3.4 Energy Saving and Optimization

The overall energy consumption is the sum of the independent consumptions created by machining, data transfer at all stages of a hazardous event, and mining algorithms. In edge computing, most energy is consumed at the local gateway; in fog computing, most energy is consumed by devices at the data processing layer. Most real-time devices are battery-powered and rechargeable, so energy-saving mechanisms must be considered. Modes like sleep and wake-up can be used, with a particular event triggering the transition from sleep mode to active mode to save energy.

Efficient allocation of computing resources is a challenging task because of

  • (i) processing power limitations,
  • (ii) rapidly growing computationally intensive applications, and
  • (iii) energy-consumption analysis requiring complex system modeling that considers spatial attributes.

Optimization algorithms can be used to identify, for each kind of sensor, the threshold values at which the switch-over between wake-up and sleep mode should occur; a small sketch follows.
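
A minimal sketch of that event-triggered duty cycling: a node sleeps until a reading crosses a threshold, wakes to report, and returns to sleep when readings fall back. The threshold and readings are illustrative; a real node would replace the prints with radio and power-management calls.

    # Minimal sketch of event-triggered duty cycling: a node sleeps until a
    # reading crosses a threshold, then wakes to report. Threshold and
    # readings are illustrative; a real driver would replace the prints.
    WAKE_THRESHOLD = 30.0     # e.g., deg C; could itself be tuned by an optimizer

    def on_reading(value: float, asleep: bool) -> bool:
        """Return the new sleep state after seeing one reading."""
        if asleep and value >= WAKE_THRESHOLD:
            print(f"wake: {value} crossed threshold, transmitting")
            return False          # active mode
        if not asleep and value < WAKE_THRESHOLD:
            print("below threshold, going back to sleep")
            return True           # sleep mode saves battery
        return asleep

    asleep = True
    for v in [25.0, 28.0, 31.5, 33.0, 26.0]:
        asleep = on_reading(v, asleep)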

4.3.5 Cost Benefit Analysis

Sometimes the real cost benefit cannot be identified accurately. Some applications can only be simulated, not deployed in the real world, because of the cost of real-time implementation or the nature of the event; it should not be a trial-and-error effort.

Before real implementation, simulations can be done. Based on the success of a simulation, prototypes can be created to analyze the real-time risks and improve efficiency. Idea identification, simulation, prototyping, and evaluation should be iterated until all stakeholders are satisfied.

4.3.6 Misguidance of Analysis

Many attempts have been made to find approaches for predicting major earthquakes, and sometimes they are misguided because of the dynamic nature of the event. Misguidance may arise from false alarms, device hacking, inefficient communication, and lack of coordination. The output of an ML algorithm depends on the data samples fed into it; if some of the collected data contains incorrect values, misguidance may result.

4.4 Findings

The principal outcomes of the study in this chapter are what the literature suggests and what this survey reveals or indicates.

4.4.1 Data Driven Reasoning

In many real-time problems, it is not possible to find all the possible solutions beforehand and select a suitable one; safety and recovery during a disaster is such a problem. In data-driven approaches, however, multiple historical datasets can be collected and their experience utilized to provide solutions on an evidence-based footing. A problem solution can be created with rules that give results similar to real prediction. Using advanced data-collection technologies and analysis techniques (with reasoning), effective recovery and reconstruction plans can be made and executed.

4.4.2 Cognitive Ability

Cognitive ability is a general rational capacity involving reasoning, problem solving, planning, abstract thinking, comprehension of complex ideas, and self-learning from experience. It needs more attention here. Even with sufficient guidance, the police, emergency handlers, NGOs, and others involved in reconstruction may experience cognitive disruption that affects their response in rescue tasks; this can happen because of the unexpected scenarios all responders face. Based on the cognitive skills it has been trained with, the system (or algorithm) should make clear decisions to provide guidance in stressful situations.

4.4.3 Edge Intelligence

Using edge intelligence, a sensing device takes control of its own data and communications, improving accuracy and quality and reducing time delays. Intelligent edge algorithms extend the set of connected systems closer to the users. Intelligent algorithms running either on the device or at the network edge, with coverage of particular locations, should be used for the more critical applications. Safety applications in disaster response then gain real-time experience and insights, delivered in a context-aware manner.

4.4.4 Effect of ML Algorithms and Optimization

ML algorithms can outperform human beings at tasks such as regression and classification. Using optimization methods, minimal resources or minimal time can be used and better solutions found; the overall execution cost can therefore be kept low within a user-defined time. Multiple feasible plans for response and recovery may be generated, but based on the real-time scenario, the optimal plan must be selected using meta-heuristic optimization algorithms.

4.4.5 Security

The data collected, and the decisions made under different scenarios, should be shared only among the people and authorities involved in the safety-management application. Security for the data, the network, and the storage devices must be provided to prevent the active and passive attacks planned by hackers. Risk should be predicted early, and standard encryption and decryption algorithms can be used to enhance security; the appropriate algorithms can be chosen per use case.
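
A minimal sketch of protecting shared disaster data at rest or in transit with symmetric encryption, using the widely available Python cryptography package (Fernet). Key handling is deliberately simplified here; a real deployment would distribute keys through a proper key-management service.

    # Minimal sketch of protecting shared disaster data with symmetric
    # encryption via the `cryptography` package. Key handling is simplified
    # for illustration only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice: distribute via a key store
    cipher = Fernet(key)

    report = b'{"site": "evac-center-3", "headcount": 214}'
    token = cipher.encrypt(report)     # safe to transmit or store
    print(cipher.decrypt(token))       # only key holders can recover the report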

4.5 Conclusion and Future Research

4.5.1 Conclusion

Disaster management is the set of activities, resource-management tasks, and plans for handling all humanitarian aspects of emergencies, preparedness, response, and recovery, in order to reduce the impact of disasters. Proper analysis and prediction on a time scale of decades shapes the planning and response arrangements. During planning and response, factors such as the safety of people, the safety of physical assets, and the provision of food, medicine, and temporary shelter must be taken care of.

Based on experience, construction ideas, urban development, green spaces, and municipal affairs provide multi-disciplinary power to make a safe city. Different countries face different kinds of natural disasters depending on region, latitude, climate change, industrialization, and other natural conditions.

4.5.2 Future Research

For further research, assets can be categorized into essentials, shelters, company infrastructure, communication lines, vehicle tracks, roads, standing water, and train tracks, and priority can be applied on that basis. Based on the category, an appropriate plan can be applied to reduce asset damage. Different security levels can be defined for different use cases, and recovery plans generated per use-case type.

References

1. Chen, B., Wan, J., Lan, Y., Imran, M., Li, D., Guizani, N., Improving Cognitive Ability of Edge Intelligent IIoT through Machine Learning. IEEE Netw., 33, 5, 61–67, Sept.-Oct. 2019.

2. Mallapragada, P.K., Jin, R., Jain, A.K., Liu, Y., SemiBoost: Boosting for Semi-Supervised Learning. IEEE Trans. Pattern Anal. Mach. Intell., 31, 11, 2000– 2014, Nov. 2009.

3. Rodriguez, M.A. and Buyya, R., Deadline Based Resource Provisioning and Scheduling Algorithm for Scientific Workflows on Clouds. IEEE Trans. Cloud Comput., 2, 2, 222–235, April-June 2014.

4. Akbar, A., Khan, A., Carrez, F., Moessner, K., Predictive Analytics for Complex IoT Data Streams. IEEE Internet Things J., 4, 5, 1571–1582, Oct. 2017.

5. Di Pascale, E., Macaluso, I., Nag, A., Kelly, M., Doyle, L., The Network As a Computer: A Framework for Distributed Computing Over IoT Mesh Networks. IEEE Internet Things J., 5, 3, 2107–2119, June 2018.

6. Shah-Mansouri, H. and Wong, V.W.S., Hierarchical Fog-Cloud Computing for IoT Systems: A Computation Offloading Game. IEEE Internet Things J., 5, 4, 3246–3257, Aug. 2018.

7. Marjani, M., Nasaruddin, F., Gani, A., Karim, A., Hashem, I.A.T., Siddiqa, A., Yaqoob, I., Big IoT Data Analytics: Architecture, Opportunities, and Open Research Challenges. IEEE Access, 5, 5247–5261, March 2017.

8. Hong, C., Applications of Cyber-Physical System: A Literature Review. J. Ind. Integr. Manag., 02, 1750012, 2017.

9. Barbosa, J., Leitão, P., Trentesaux, D., Colombo, A.W., Karnouskos, S., Cross benefits from cyber-physical systems and intelligent products for future smart industries. 2016 IEEE 14th International Conference on Industrial Informatics (INDIN), Poitiers, pp. 504–509, 2016.

10. Ghosh, A., Chakraborty, D., Law, A., Artificial intelligence in Internet of things. CAAI Trans. Intell. Technol., 3, 4, 208–218, 2018.

11. Weihs, C. and Ickstadt, K., Data Science: the impact of statistics. Int. J. Data Sci. Anal., 6, 189–194, 2018, https://doi.org/10.1007/s41060-018-0102-5

12. Jordan, M.I. and Mitchell, T.M., Machine Learning: Trends, Perspectives, and Prospects. Science, 349, 6245, 255–60, 2015.

13. Esfahani, A., et al., A Lightweight Authentication Mechanism for M2M Communications in Industrial IoT Environment. IEEE Internet Things J., 6, 1, 288–296, Feb. 2019.

14. Moustafa, N., Adi, E., Turnbull, B., Hu, J., A New Threat Intelligence Scheme for Safeguarding Industry 4.0 Systems. IEEE Access, 6, 32910–32924, 2018.

15. Lepenioti, K., Bousdekis, A., Apostolou, D., Mentzas, G., Prescriptive analytics: Literature review and research challenges. Int. J. Inf. Manag., 50, 57–70, February 2020.

16. Carvalho, T.P., Soares, F.A.A.M.N., Vita, R., Francisco, R.d.P., Basto, J.P., Alcala, S.G.S., A systematic literature review of machine learning methods applied to predictive maintenance. Comput. Ind. Eng., 137, 1–10, September 2019.

17. Sahal, R., Breslin, J.G., Ali, M.I., Big data and stream processing platforms for Industry 4.0 requirements mapping for a predictive maintenance use case. J. Manuf. Syst., 54, 138–151, December 2019.

18. Yan, J., Meng, Y., Lu, L., Li, L., Industrial Big Data in an Industry 4.0 Environment: Challenges, Schemes, and Applications for Predictive Maintenance. IEEE Access, 5, 23484–23491, 2017.

19. Ritchie, H. and Roser, M., Number of reported disasters by type, National Geophysical Data Center, 2019. [Online]. Available: https://ourworldindata.org/natural-disasters.

20. Johnson, G., Solving disaster management problems using ArcGIS. Nov. 9, 2003. http://campus.esri.com

21. Samaad, T., Tahir, G.A., Mansoor-ur-Rahman, Ashraf, M., Comparative Performance Analysis Between Agent-Based And Conventional Disaster Management System. 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, pp. 1–6, 2018.

22. Arinta, R. and Andi, E., Natural Disaster Application on Big Data and Machine Learning: A Review, Researchgate publications, 2019.

23. Gao, W. and Zheng, Y.R., Study on Some Forecasting Methods in Geotechnical Engineering, in: Proc., 6th Conf. of Chinese Rock Mechanics and Rock Engineering Society, Beijing, Science Press, vol. 1, pp. 90–93, 2000.

24. Comfort, L., Risk, Security, and Disaster Management. Annu. Rev. Polit. Sci., 8, 335–356, 2005.

25. Wang, W. and Zhang, Y., Group decision making in safety planning for earthquake disaster area reconstruction. 2011 Second International Conference on Mechanic Automation and Control Engineering, Hohhot, pp. 6552–6555, 2011.

26. Ishida, T., Sakuraba, A., Sugita, K., Uchida, N., Shibata, Y., Construction of Safety Confirmation System in the Disaster Countermeasures Headquarters. 2013 Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, Compiegne, pp. 574–577, 2013.

27. Fereiduni, M. and Shahanaghi, K., A robust optimization model for distribution and evacuation in the disaster response phase. J. Ind. Eng. Int., 13, 117–141, 2017.

28. Hasegawa, M., Hanada, K., Idei, H., Kawasaki, S., Nagata, T., Ikezoe, R., Onchi, T., Kuroda, K., Higashijima, A., Predictive maintenance and safety operation by device integration on the QUEST large experimental device. Heliyon, 6, 1–7, June 2020.

29. Gao, W., Predication of Landslide Based on Grey System and Evolutionary Artificial Neural Networks. 2010 International Conference on System Science, Engineering Design and Manufacturing Informatization, pp. 64–67, 2010.

30. Matthews, R.A.J., Decision-theoretic limits on earthquake prediction. Geophys. J. Int., 131, 3, 526–529, Dec. 1997.

31. Molchan, G. and Romashkova, L., Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm. Geophys. J. Int., 183, 3, 1525–1537, Dec. 2010.

32. Takahashi, N., Imai, K., Sueki, K., Obayashi, R., Emoto, K., Tanabe, T., Real-Time Tsunami Prediction System Using Oceanfloor Network System. 2019 IEEE Underwater Technology (UT), Kaohsiung, Taiwan, pp. 1–5, 2019.

33. Voigt, S., Kemper, T., Riedlinger, T., Kiefl, R., Scholte, K., Mehl, H., Satellite Image Analysis for Disaster and Crisis-Management Support. IEEE Trans. Geosci. Remote Sens., 45, 6, 1520–1528, June 2007.

34. Nico, G., Pappalepore, M., Pasquariello, G., Refice, A., Samarelli, S., Comparison of SAR amplitude vs. coherence flood detection methods— A GIS application. Int. J. Remote Sens., 21, 8, 1619–1631, May 2000.

35. Rauste, Y., Herland, E., Frelander, H., Soini, K., Kuoremaki, T., Ruokari, A., Satellite-based forest fire detection for fire control in boreal forests. Int. J. Remote Sens., 18, 12, 2641–2656, Aug. 1997.

36. Hannah, K.J., Ball, M.J., Edwards, M.J., Disaster Recovery Planning, in: Introduction to Nursing Informatics. Health Informatics (formerly Computers in Health Care), Springer, New York, NY, 2006.

37. Alabdulwahab, M., Disaster Recovery and Business Continuity. Int. J. Sci. Eng. Res., 7, 3, March 2016.

38. Sanderson, D., Sharma, A., Kennedy, J., Burnell, J., Asian J. Environ. Disaster Manag., 6, 131–151, 2014.

39. Barnes, C.F., Fritz, H., Yoo, J., Hurricane Disaster Assessments With Image-Driven Data Mining in High-Resolution Satellite Imagery. IEEE Trans. Geosci. Remote Sens., 45, 6, 1631–1640, June 2007.

40. Harris, M. and Laubsch, K., Disaster recovery. Eng. Technol., 4, 7, 26–29, 25 April-8 May 2009.

41. Moore, M., Case study: Electrical disaster recovery operations for a hospital. 2013 IEEE IAS Electrical Safety Workshop, Dallas, TX, pp. 69–76, 2013.

42. Slinn, J., Innovative technologies in disaster recovery: the big picture, in: Railway Safety Assurance. Management and Method in a Safe Network, pp. 1–22, 2014.

43. Shao, B.B.M., Optimal redundancy allocation for information technology disaster recovery in the network economy. IEEE Trans. Dependable Secure Comput., 2, 3, 262–267, July-Sept. 2005.

44. Lu, J., Guo, J., Jian, Z., Xu, X., Optimal Allocation of Fire Extinguishing Equipment for a Power Grid Under Widespread Fire Disasters. IEEE Access, 6, 6382–6389, 2018.

45. Zhang, S. and Liu, J., Analysis and Optimization of Multiple Unmanned Aerial Vehicle-Assisted Communications in Post-Disaster Areas. IEEE Trans. Veh. Technol., 67, 12, 12049–12060, Dec. 2018.

46. Wang, C.-f. and Sun, Y.-m., Optimization of urban morphology for comprehensive disaster prevention supported by GIS. 2011 International Conference on Multimedia Technology, Hangzhou, pp. 1136–1138, 2011.

47. Yuan, W., Zhao, L., Zeng, B., Optimal power grid protection through a defender-attacker-defender model. Rel. Eng. Syst. Saf., 121, 83–89, Jan. 2014.

48. Yuan, W., Wang, J., Qiu, F., Chen, C., Kang, C., Zeng, B., Robust Optimization-Based Resilient Distribution Network Planning Against Natural Disasters. IEEE Trans. Smart Grid, 7, 6, 2817–2826, Nov. 2016.

49. Zhu, C., Fang, G., Wang, Q., Optimization on Emergency Resources Transportation Network Based on Bayes Risk Function: A Case Study. Math. Probl. Eng., 4, 1–9, 2016.

50. Nakamura, E.T. and Ribeiro, S.L., A Privacy, Security, Safety, Resilience and Reliability Focused Risk Assessment in a Health IoT System: Results from OCARIoT Project, Global IoT Summit (GIoTS), pp. 1–6, 2019.

  1. *Corresponding author: [email protected]