2
Ethics and Transhumanism: Control using Robotics and Artificial Intelligence

2.1. Introduction to transhumanism

There are two paradigm changes that will quickly transform our lives and that raise questions of ethics: robotics and transhumanism (Chapter 2) and the uberization of our society (Chapter 3).

In this chapter, we analyze the implementation of ethics in new business environments, monitored and controlled through robotics- and artificial intelligence (AI)-based support systems.

Indeed, to take just one example, robotics can replace or compensate for a physical deficiency, such as a missing hand. This is considered an advancement. Sometimes, however, instead of a simple physical recovery, we could implement an enhanced functionality, such as the transplantation of a “super-hand”. The result would be much better, but would everybody agree with such a solution?

When, thanks to new technologies, we develop a disease-free human being, or, thanks to AI, replace a given intellectual disability with sophisticated features enabling super-intelligence, we are faced with a problem of ethics. This is the case in industry when we develop so-called “augmented systems” [FER 16].

image

Figure 2.1. Olympic Games – Oscar Pistorius

When we use an exoskeleton, or when we connect ourselves to a supercomputer endowed with a powerful AI reasoning system, we are entering the era of transhumanism.

In contrast, as soon as we develop robots like “Terminator”, we no longer know whether we are faced with “robots” or living beings. Where is the boundary?

This is the kind of problem we will address and try to explain: we do not yet know how to implement advances in robotics and artificial intelligence, how to embed these advances in our bodies or, in terms of ethics, what limitations (if any) we should place on these scientific advances. What is the impact in terms of management and control?

2.2. Ethics, robotics and artificial intelligence

2.2.1. Differences between computer, human brain, artificial intelligence and thinking

It is often said that the computer reproduces the human brain, whereas artificial intelligence emulates its functioning. In reality, this remains an aspiration and much work lies ahead. Among the most advanced achievements are IBM’s Watson machine for certain types of cognitive problems and the Human Brain Project, which aims to simulate the construction of a brain in a computer. For the latter, current limits on computational power mean that it is not yet feasible.

Thus, the results at present are still far from what people think: we are at the level of a limited transhumanism. Hereafter, we will come back to one of the above examples to explain another concept related to the externalization or internalization of a human function:

  1) When we have a disability, it is possible to integrate a technological organ into our body, or replace the deficient one, to reduce our deficiency. In this case, we externalize a human function and compensate for the deficit with, for example, an artificial sensory organ such as an eye, ear or hand, or a prosthesis for a paralyzed or missing leg.
  2) In factories, it will be possible to use an exoskeleton to help a worker lift heavier loads. Increasing an assembly force 10-fold, running faster, pedaling faster on a bicycle and so on require devices designed to enhance a human function (augmented humankind).

Nowadays, we can see that, at the level of transhumanism, defects, handicaps and diseases can be corrected (which is praiseworthy), but, and this is new, it is also possible to improve human beings and provide them with enhanced capacities. Of these two cases, which is the more ethical? Should the second, because of its potential for abuse by ill-intentioned people, be rejected?

To answer this question, we recall the definition of ethics adopted in this chapter: “Ethics is a philosophical discipline that reflects on the values of existence, the conditions of a happy life, the notion of ‘good’ and justice, and (when questions of morality are not yet settled) the behaviors to be followed, according to our own conscience, to make the world humanly habitable and sustainable”. In this sense, ethics is a search for an ideal ecosystem and the best conduct of existence.

With regard to artificial intelligence, machines now often handle large volumes of data, perform repetitive tasks, monitor our actions pervasively, take insidious and ubiquitous control of our lives and make decisions that influence our collective unconscious and subconscious.

In Japan, by contrast, the machine is considered to have a soul, like any living being or object. The robot is seen in a positive light: people feel none of the apprehension about robotics that is experienced in the West, and a robot is considered a friend or partner.

The big challenge is the development of autonomous intelligence in automated systems. At present, we can produce good computer programs, with advanced evolutionary algorithms, but they are endowed with neither autonomous intelligence nor consciousness-based reasoning.

We are, however, impressed to see computer systems exploiting “big data” resources. For example, they are able to extract the right information, or a weak signal, from the global knowledge available in the cloud, and to construct a very precise and well-argued decision path. In medicine, for instance, a computer can study patient files in detail, sometimes confirming a diagnosis better than the doctor. This is the case with the Watson machine, which in 2016 diagnosed a very rare disease in a patient living in Japan, enabling rapid treatment (through a comparative analysis of several hundred thousand medical records).
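The kind of comparative analysis of medical records described above can be pictured, in a highly simplified way, as a nearest-neighbor search over coded patient files. The sketch below is our own illustration, not the Watson implementation: records are reduced to hypothetical feature vectors, and the closest archived case is retrieved by cosine similarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def closest_case(patient, archive):
    # Return the archived (diagnosis, vector) pair most similar to the patient
    return max(archive, key=lambda case: cosine(patient, case[1]))

# Hypothetical coded records: (diagnosis, feature vector)
archive = [
    ("common leukemia", [1.0, 0.1, 0.0, 0.2]),
    ("rare leukemia subtype", [0.9, 0.8, 0.7, 0.1]),
    ("anemia", [0.2, 0.1, 0.9, 0.9]),
]

patient = [0.85, 0.75, 0.6, 0.15]
diagnosis, _ = closest_case(patient, archive)
```

In a real system, the feature coding and the similarity measure would of course be far richer, but the principle, matching a new case against a very large archive, is the same.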

Similarly, the first “psy” software has been developed: an expert system (knowledge-based system, KBS) that plays the role of a psychologist and gives a very precise diagnosis of some mental pathologies, with a high confidence level. Its main characteristic is the mutual trust established between the machine and the patient: the patient can speak freely to the machine and give in-depth information without being judged [NAC 14].

2.2.2. We cannot predict the future, as we are living in a complex system

It is said that progress in science always has a positive impact on the economy or the social environment; in terms of performance, activity and opportunities, Moore’s law applies. Now that advances are accelerating, the question is whether theory and practice will align with one another.

As long as we have not reached, or come close to, the processing capacity of our brain, we do not know what will happen. There is no way to determine whether, over the next 5 years, 7.1 million jobs will be destroyed and only 2.1 million created, as suggested by the Research Center of the Davos Economic Forum in 2016. Can we anticipate such a trend? Could the opposite occur? Is this anticipation realistic?

For the first time, the integration of a new technology could be more job-destructive than job-creating, because many more people are involved, regardless of area or skill set. However, we cannot foresee anything, because what matters is the balance between the creation of activities, wealth and jobs (parameters to be integrated into the notion of ethics). It is true that conventional robotics first eliminated many blue-collar jobs, which were replaced by new jobs aimed at producing more sophisticated services and products.

Today, cognitive robotics is making moderately skilled jobs in administrative tasks and services disappear: white-collar workers are now affected. These jobs may not be compensated for by more elaborate positions in, say, advanced computer science or engineering; hence the fears expressed by many specialists. Moreover, there is a prospective discourse, again based on Moore’s law: it would seem that by approximately 2050, or earlier, the exponential rise in computing power will be such that consciousness will emerge from our computers. The emergence of consciousness corresponds to a point of “singularity”: a disruptive event (in the sense of chaos or catastrophe theory).

2.2.2.1. Consciousness: a simple physiological phenomenon?

This question is a follow-on idea that could explain the in-depth nature of consciousness and ethics, and thus provide a better understanding of the complexity of ethics.

To help diagnose damage to our most complex organ, the brain, researchers at Harvard, Princeton, Montpellier and Imperial College London are deploying sophisticated image analysis tools, powered by GPU computing and associated with MRI and deep learning [ZHA 15, CHA 16]. The objective is to carry out research across fundamentally important areas such as decoding the intricate structure of the human brain, that is, its wiring diagram (Connectome project), discovering the origins of the universe (MWA telescope project) and studying the quantum chemistry of molecules (Q-Chem project). This is not a surprise: in nature, all the underlying mechanisms are the same [MAS 15b].

In medicine, brain diseases are becoming among the most common high-risk illnesses in modern society. Thanks to advances in science (MRI is an important diagnostic screening approach, associated with machine-learning techniques such as convolutional neural networks, CNNs) and in surgery, it is becoming possible to treat these diseases more efficiently. Here, we must mention two outstanding results:

  • – In Montpellier, neurosurgeon Professor Hugues Duffau [DUF 16] operates on the brains of patients, proceeding step by step. By asking questions and using electro-stimulation, he can probe the arcana of consciousness and define the limits of brain function. The approach consists of removing only the diseased parts of the brain, those whose removal has no effect on the physical and psychological behavior of the patient. To date, he has mapped more than 700 human brains, which has allowed him to identify important areas of the brain that must be preserved.
  • – At the Allen Institute in Seattle, Harvard and Princeton, we can note the work of Professors C. Koch and S. Reardon [REA 17] on the structure of the brain. They have provided evidence of neurons that develop in an unusual way: their length is remarkable, and they are thought to act as a circular motorway around the cortex of our brain. They can connect to other neurons throughout the brain and form a circular crown. These neuroscientists suppose that such interconnections could play a role in human consciousness. Indeed, as with introspection, consciousness is related to the synchronization of many different parts of the neocortex. These specific neurons, issuing from a specific area of the brain (the claustrum), seem to be connected to most of the external (and internal) parts of the brain: they could therefore achieve the appropriate coordination and coherence of the brain.
image

Figure 2.2. Allen Institute for Brain Science. Reconstruction of a large neuron in the brain. For a color version of this figure, see www.iste.co.uk/massotte/ethics2.zip

Such advances are important as they allow a progressive understanding of the adaptiveness, flexibility and agility of the brain: how our consciousness and ethics can emerge from it, and how we can better control, or at least explain, them.

In a more general way, we can extend these discoveries and knowledge to our global information system.

The machine has access not only to all the information stored in the networked computers of the planet (public or private information systems), but also to everything connected to individuals: smartphones (MIDs), sensors, control and surveillance cameras, drones and so on. Since everything is connected (the Internet of Things), there is no longer only individual consciousness: just as cooperation and collaboration give rise to collective intelligence, a collective consciousness will emerge from the interactions between all individual consciousnesses. Just as we talk about social innovation [MAS 13], we can talk about collective and social consciousness. It will emerge and may come to represent a norm, a world of life and so on, to which we must bend!

The dictatorship of thought imposed by governments will change paradigm and become a societal dictatorship of thought. What, then, of an ethics that until now has issued from individual consciousness?

What about the concept of ethics as presently defined in the Rotary? Surely it continues to apply to our conventional world, but it will have to adapt to integrate the new evolutions and trends of our society.

To manage the behavior of robots (mechanical, electronic or cognitive), scientists and experts continue to refer to the three laws of Asimov, first published in 1942. These are common-sense “laws”, and we detail them later in this chapter. In the actual world of robotics, no Asimov law is implemented, because this would imply a certain vision of the world that robots are currently unable to grasp: they lack the symbolic dimension of anything. They act literally, executing instructions given to them by a computer program, autonomous or not, but designed by human beings (not yet designed by other robots). Therefore, principles of life are not yet incorporated into them.

As a result, the laws of robotics are not usable as such: a robot cannot by itself take an interest in human activities and concerns. However, this is somewhat what we are trying to achieve: in factories, new-generation robots can work side by side with humans without hurting them. They are designed to sense the presence of a human being and control the movements generated by a worker, because humans are unpredictable! Others will also talk about cobotics, which is completely different.

2.2.3. People who fear risks predict self-reproducing robots

Those who speak of risk envision times when the machines will create other machines that will escape us completely.

Machines are already self-repairing and self-configuring. There is indeed the idea that the machine will one day be able to self-replicate, self-develop and perhaps reproduce on its own (according to the definition of life!). This is still science fiction: for now, no machine is capable of doing so. If a point of singularity exists (≈ 2050?), consciousness will have emerged and the machine will implement instructions to improve itself (self-maintenance and self-enhancement). It may even decide to improve itself if it deduces that human beings are taking too long to get there. So yes, such a machine is a possible scenario and a major ethical problem. However, it is just one assumption among many others: today there is no proof to accredit this kind of theory.

2.2.4. Ethics: why scientists are so worried

It is difficult to know why people like Bill Gates or Stephen Hawking are so worried by the evolution of these technologies, but we can ask ourselves the same questions, because we are dealing with many concepts and possibilities that may prove hazardous or unpredictable:

  • – genetic manipulation and the use of nanocomponents with fractal structures: all of these may have amplified effects we cannot yet predict;
  • – bio-engineering or bio-robotics: some consider that, in the depths of biology, there are only machines (as recently seen with molecular motors) and that, at the nanotechnological level, everything can be (re)programmed;
  • – hybridization or transhumanism: we are thinking of computer nanorobots (in the IoT) implanted in the human body that will communicate through direct (telepathic?) connections between the brain and computer networks.

Everything, everywhere, is bubbling and being interpreted: we arrive at a vision in which all these sciences converge. A world in which robotics, computer science, nanosciences and biology merge together: it is a new paradigm that will arise, and a nightmare vision for many people. What kind of ethics does it call for? Is transhumanism congruent with ethics? Should humans be allowed to carry out actions just because they can? Can we consider transhumanism ethical?

2.2.4.1. In the field of defense, too, many ethical issues remain unresolved

As human beings, we may be concerned about the use of new military weapons. This is also an issue for military robotics. There are not only individual machines, such as drones, but also networked machines: interconnected robots in the form of clusters. They work together and are made to control a territory or nation by distinguishing between “friends” and “enemies”.

It is said that humans are behind these artificial systems and control them; but it is known that, even with drones controlled by humans, there are always errors due to design or development mistakes. Under these conditions, can we trust a robot and grant it autonomy and control of its environment in order to avoid human errors? This is not too serious if they are only small flying machines, but fighter jets are already flying computers, easily automated and made autonomous: they can fly without the control or presence of a human being. The same goes for underwater robots, tanks and fully robotic vessels, all of which may work together in an interconnected way, without the direct control of human beings.

Another example is the Legged Squad Support System: a “dog robot”, seen on TV, which can assist soldiers by carrying heavy loads. It is frightening to watch: it is “strange” and behaves like a donkey. If it escapes the control of the soldiers and becomes autonomous for whatever reason, it can be very dangerous.

In terms of ethics, some researchers believe that scientific research in this area must be stopped, because it can take us to a level of involvement from which return is quite impossible. Indeed, a point of singularity may appear and confront us with something impossible to control (a machine that refuses to obey orders given by human beings). The point of no return (like a pass, in chaos theory) is comparable to an irreversible mutation (as with a mutant virus).

At the strategic level, in a company, such technological evolutions invite us to take the time to think carefully about the direction we are taking and to put safeguards in place for the security and benefit of all and for sustainability.

2.2.4.2. Are safeguards possible in a poorly structured world, with sparse and competing R&D?

In 2007, Al Gore and the IPCC won the Nobel Peace Prize for their work on climate and environmental protection. Discussions with some of these experts have shown that it is impossible to oppose progress and foresee all deviances: there will always be a country, an organization or an army that will do what is forbidden. Instead, it is necessary to imagine what deviations could arise and prepare to face them reactively, should they emerge. In this sense, ethics requires us to anticipate unpredictable events. There is another aspect that is not really taken into account: hacking. One can hack a robot, or a network of robots, which can then behave aggressively: it can happen, and it will be possible everywhere, thanks to the IoT. We must therefore ask ourselves: how can we control this? And is hacking ethical when it makes it possible to decode the program of an out-of-control feature?

2.2.5. Ethics and safeguards in business

Security of data and protection of humankind are primary concerns in business ethics. Simply put, a first approach consists of not connecting all computers and robots to the Internet. For example, putting the electrical grid on the Internet, and therefore making it vulnerable, is a serious strategic mistake that penalizes all users, the economy and health.

Having a closed, internal network for all vital functions is already a step forward, because otherwise everything can be accessed, despite the advanced encryption systems we currently have.

In the area of R&D, and for better understanding and self-regulation of a complex system, the following step has to be implemented: when designing a complex system, standards should be discussed in an ethics committee, in order to obtain a more robust and ethical opinion. Indeed, global standards cannot be relied upon, even if they exist, because some people will always go against the common good for their sole benefit. Let us not forget that our societies are like prey–predator systems and that the civilizations that disappear are those that were unable to adapt, for three reasons: lack of skill, ignorance or greed.

2.3. Ethics and robotics

2.3.1. Introduction

The fields of artificial intelligence and robotics are gradually spreading and permeating all sectors of the economy. Therefore, when we talk about business ethics, we cannot forget to integrate the positive and negative notions of ethics and behavior (antagonistic values and virtues) that we may hold about the new technologies and their context.

In order to better understand how to address the problem of ethics with new technologies, the right way to proceed is to consider two significant examples:

  • – the field of “defense”, more precisely that of military weapons;
  • – the field of “games”, or social networks.

Why? Simply because these two application areas have significant funding, high demand and a large market share. They can also be used as testing and launching platforms for most of the technological changes to come in the civilian arena. Indeed, it is thanks to the military that cybernetics, autonomous systems, logistics, control processes, robotics and artificial intelligence will develop in a consistent way. With the games sector, it is cooperative and collaborative systems, strategy problems, the exploitation of big data, the phenomena of self-organization in social networks and organization theory that will be able to develop. The first area is studied in this chapter, while the theory of networks will be discussed in the next chapter, devoted to social networking.

2.3.2. Some characteristics of the weapons sciences: intelligent robots and wars

Nowadays, the presence of information systems and robotics on the battlefield is commonplace. They were first designed to protect human lives on each opposing side. Over time, however, with technological advances, a new generation of weaponized machines has come to fight in our place: they are diverse and capable of land combat as well as aerial combat. They can be remotely controlled (drones) or possess autonomy (automatic turrets spotting targets with infrared sensors and gyro-programmed shots).

These machines have proved effective in recent wars. However, fighters and observers have raised a problem: robots (such as drones) are blind and can strike their targets without distinction (especially when unpredictable, and therefore unscheduled, events occur). Therefore, some people are opposed to the use of drones because they can cause the death of a large number of enemies, with significant “collateral damage” (of the order of 50–50).

image

Figure 2.3. “MQ-1 Predator” drone

Fights between different autonomous robots are possible but have not yet occurred. Nowadays, robots play a major role in wars. They have advantages and disadvantages compared to human beings. At present, robots obey orders and have no feelings (anger, fear, vengeance) that could alter their decision-making in certain situations. This is evolving, as robots are progressively able to integrate notions of semantics and abductive reasoning. Finally, it is impossible to divert a robot from its objective because it is “incorruptible”; in this sense, robots are reliable tools.

As robots are remotely controlled and directed by human beings, sometimes thousands of kilometers from the battlefield, it is possible to hack the communication system dedicated to human–robot orders and information exchange, modify the weapons’ mode of control and turn them against their owner. This calls for important ethical decisions in the design, development, management, control and monitoring of these systems [RUS 15].

Application. In the following, we consider lethal autonomous weapons (LAWs): they are considered a paradigm shift (a break) in the science of war.

Lethal autonomous weapons (LAWs) are a type of robot designed to select and attack military targets (people, installations) without intervention by a human operator. LAWs are also called lethal autonomous weapon systems (LAWS), lethal autonomous robots (LARs), robotic weapons or killer robots. Their autonomy is limited because there is a human in the decision loop, who gives the final command to attack; they have, however, more autonomy than unmanned combat aerial vehicles (UCAVs) or “combat drones”, which are currently remote-controlled by a pilot.

These autonomous weapons select and fire at targets with no, or reduced, human intervention. They can be considered killer weapons when the targets are human beings.

These stand-alone weapons make extensive use of artificial intelligence and robotics to build equipment and ensure target perception, motor control, navigation, mapping, tactical decision-making and long-range planning. The technologies are similar to those used for autonomous, driverless cars: this type of AI is based on the DQN system from DeepMind. This company developed the Deep Q-Network (DQN) algorithm, which combines Q-learning with a deep neural network that learns action values directly from raw inputs. Q-learning is a widespread reinforcement learning algorithm: it is used not only in game playing, but also for “human-level control through deep reinforcement learning”.
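The Q-learning update at the heart of DQN can be stated very compactly. The sketch below is our own minimal tabular illustration (the toy “corridor” environment is an assumption for demonstration, far simpler than the deep networks DeepMind uses): an agent learns action values Q(s, a) from rewards alone.

```python
import random

def q_learning(n_states, n_actions, episodes, step, alpha=0.5, gamma=0.9, eps=0.1):
    # Tabular Q-learning: Q[s][a] estimates the long-term value of action a in state s.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Core update: move Q(s, a) towards reward + discounted best future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy corridor of 4 states; action 1 moves right, action 0 moves left.
# The only reward is obtained on reaching the rightmost state.
def step(s, a):
    s2 = min(3, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

random.seed(0)
Q = q_learning(4, 2, 500, step)
```

After training, the learned values favor moving right in every state, which is the optimal policy in this toy environment; DQN replaces the table with a deep network so that the same update can work on raw sensory inputs.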

These weapons, autonomous or semi-autonomous, are used in two programs designed by DARPA: Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE). They prefigure the management and production systems that we will use in future manufacturing plants (Industry 5.0) operating in uncertain and disturbed environments. The aims of such systems, with their associated criteria, can be summarized as follows:

  • – military necessity (≈ industrial objective);
  • – differentiation between combatants and non-combatants (≈ diagnosis or differentiation of the operating state of interacting systems);
  • – proportionality between the value of the military objective and the potential for collateral damage (≈ quick choice of a suboptimal decision → game theory and regenerative algorithms);
  • – taking into account human-like feelings based on principles of ethics, therefore subjective and semantic.

At the level of global ethics, some countries, such as Germany and Japan, do not accept that the decision of life or death over a person be taken by an autonomous robot (weapon). This implies, as in an industrial company, that the decision to keep or “fire” a person, an employee or a group of people can only be made with the help of a DSS (decision support system). It is a problem of human dignity and ethics, intended to avoid serious mistakes.

Is it not said, in legal circles, that it is better to pardon a guilty man than to condemn an innocent one?

We follow the same type of reasoning when managing “alpha” and “beta” risks in quality control in a production system (the risk of rejecting a good product versus that of accepting a defective one).

This is not exactly what is done in politics or in the media, where people have difficulty with the discriminant analysis needed to distinguish an ethical from a non-ethical situation or fact.

In robotics, with the evolution of technologies and the possibility of implementing a “precision” approach (as we do in sustainability), the hardware capabilities of autonomous weapons will continuously improve. By contrast, software errors (hypothetical errors in the artificial intelligence programs that control them) are not yet under control.

Decision-makers, scientists and specialists in robotics and artificial intelligence are then obliged to adopt a position (technical and ethical), as has been done with the nuclear industry, chemical agents and biology (genetic manipulation, bacteriological weapons, etc.).

2.4. Artilects

The problems of artilects emerged in the 2000s with the notion of cognitive robotics.

The question we had to solve at the EMA was quite simple: in defense, or in industry, is it advisable to develop many independent and autonomous fighting (or working) units to solve a problem (on a battlefield or in the economy), rather than building one big aircraft carrier or one large manufacturing plant?

We will not detail this problem but just highlight some characteristics of the solution based on artilects.

An artificial intellect (or “artilect”), according to Dr Hugo de Garis, is such a cognitive robot: it possesses a computer intelligence superior to that of the human species in one or more spheres of knowledge, together with an implicit will to use that intelligence. Artilects are a concern of artificial intelligence scientists, who speculate that human society may soon have to face the question of whether, and how, we can restrain AI from making decisions inimical to humankind.

Artilects are useful programs able to accomplish specific tasks (operational research, decision-making) in an autonomous way within a network of computers. The concept is interesting because development costs are proportional to the square of the number of lines of code (Massotte’s law), and because of the autonomy it provides for solving difficult problems. As we can see, the advantages relate to leanness, flexibility and reliability, whereas the disadvantages relate to maintenance, security and control.
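The economic argument for many small autonomous units follows directly from the quadratic cost assumption. The following toy calculation (illustrative figures only, with an assumed unit cost constant k) compares one monolithic system with ten independent artilects of the same total size:

```python
def dev_cost(lines, k=1.0):
    # Quadratic cost model: cost grows with the square of the code size
    return k * lines ** 2

# One monolithic 10,000-line system versus ten independent 1,000-line artilects
monolith = dev_cost(10_000)
artilects = 10 * dev_cost(1_000)

# Under this model, the monolith is 10 times more expensive to develop
ratio = monolith / artilects
```

More generally, splitting a system into n equal units divides the quadratic development cost by n, which is exactly the lean and flexible advantage cited above; the price is paid on the maintenance, security and control side.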

Here, Dr de Garis assumes that within one or two generations we will have computers more sophisticated than human brains, able to experimentally evolve their intelligence into something beyond what humans might contemplate or understand. In that case, we are faced with sustainability and ethics problems. How do we control uncontrollable situations?

Concerning ethics, can we build machines that will jeopardize the human species?

Ultimately, are we ready to accept an unemployment rate of 50%?

Finally, is it ethical to pay the same wage to the working or non-working people?

All these concerns are just question marks, but it would be advisable to work on them, because mother nature and the associated sciences are evolving faster than expected, with major responsibilities at stake.

Here, we will not develop the concerns formulated by Professor de Garis: we simply note that part of the population believes that artilects would probably want to leave our planet to seek intelligence elsewhere in the universe, while others believe it would be too dangerous for the human species to allow artilects to be developed at all.

2.5. The world: a hybrid planet with robotics and living species

In terms of the general principle of robotics, we have two complementary agents:

  1) Human beings possess natural faculties of perception, cognition and action. We use our senses to measure and understand the state of the world, our brains think and choose the actions to be undertaken, and our bodies perform these actions.
  2) The capabilities of autonomous robots, however, are very different. Robots have limitations in perception, cognition and motor functions: they are not able to fully perceive a scene, recognize or manipulate an arbitrary object, or understand all spoken or written languages. As Manuela Veloso, Professor of Computer Science at Carnegie Mellon University, puts it: “Robots will not be able to move on certain types of ground; they can assist human beings but not replace them; they will have to ask for help and express their internal functioning”.

2.5.1. Applications of cobots

Cobotics (i.e. collaborative robotics) is an emerging branch of technology that aims to produce robots able to assist humans by automating some of their tasks. To some extent, in light of the discussion above, motorized exoskeletons can be seen as specific cobots: they are mechatronic robots, controlled by AI, which assist the movements of the operator but are not autonomous, the difference being that they are generally driven “from the inside”. In terms of ethics, this distinction does not matter.

image

Figure 2.4. a) NASA COBOT (assisted walking); b) exoskeleton DAEWO – assisted transportation and logistics.

(source: http://er.jsc.nasa.gov/images/ER4/jsc2012e064813_alt.jpg)

Faced with real environments, current cobots are not yet able to perform every kind of pattern matching. They cannot recognize all objects: supervised learning steps are required before they can walk, grasp or manipulate an object. They can, however, collect useful information while they are moving. For example, they can generate an accurate map of the spaces they traverse and gradually increase their autonomy.
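The incremental mapping behavior described above can be sketched as follows. This is a purely illustrative, hypothetical example: the grid representation, cell values and function names are our own assumptions, not taken from any particular cobot platform.

```python
# Hypothetical sketch: a cobot folding range-sensor readings into an
# occupancy grid as it moves, gradually mapping the space around it.

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def make_grid(width: int, height: int):
    """Start with every cell in the unknown state."""
    return [[UNKNOWN] * width for _ in range(height)]

def update(grid, x: int, y: int, occupied: bool) -> None:
    """Fold one sensor reading into the map."""
    grid[y][x] = OCCUPIED if occupied else FREE

def coverage(grid) -> float:
    """Fraction of the space already mapped -- a crude measure of how
    much autonomy the robot has gained in this environment."""
    cells = [c for row in grid for c in row]
    return sum(c != UNKNOWN for c in cells) / len(cells)

# The robot explores three cells of a 4x3 space and finds one obstacle.
grid = make_grid(4, 3)
for (x, y, occ) in [(0, 0, False), (1, 0, False), (2, 0, True)]:
    update(grid, x, y, occ)
```

The map here only grows as the robot moves, which matches the idea that a cobot's autonomy increases gradually with the information it collects.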

The concept of symbiotic autonomy also allows robots to ask human beings for help directly, or to request it through the Internet. Robots and humans can now help each other to exceed their respective abilities: a hybrid synergy involving both a worker and a robot. Is the collective intelligence of robots ethical, especially when compared with current collaborative practices in companies?
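A minimal sketch of this symbiotic-autonomy loop might look as follows. Everything here is an assumption for illustration (the `Task` type, the confidence threshold and the helper functions are hypothetical): the robot attempts a task on its own, and when its confidence falls below a threshold, it escalates to a human instead of failing silently.

```python
# Hypothetical sketch of symbiotic autonomy: act autonomously when
# confident, otherwise request help from a human or a remote service.

from dataclasses import dataclass
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed tuning parameter

@dataclass
class Task:
    name: str

def attempt(task: Task) -> float:
    """Try the task autonomously; return the robot's confidence in
    success (stubbed here with a seeded pseudo-random value)."""
    random.seed(len(task.name))
    return random.random()

def request_human_help(task: Task) -> str:
    """Escalate to a human; in a real system this could be a prompt to
    a nearby person or a call to a remote operator over the Internet."""
    return f"help requested for '{task.name}'"

def run(task: Task) -> str:
    confidence = attempt(task)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"'{task.name}' done autonomously"
    return request_human_help(task)
```

The design choice is the point: the robot does not need full autonomy to be useful, as long as asking for help is a first-class outcome of every task.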

There are still barriers to robots and humans co-habiting in a safe and productive way. The question is therefore to define the limits and modes of collaboration, because behind this there is a human element, with its emotions, psyche, strengths and weaknesses. Ethics therefore intervenes in terms of the functionalities to be covered, the complementary and collaborative roles of each party, the interrelations between man and machine and so on.

2.5.2. From the drone to the autonomous car

Can we expect more and more driverless cars, self-piloted drones and robots in medicine, agriculture and industry? The answer is probably yes, since technological advances and research results are first implemented in strategic sectors such as defense and health before being transposed to more conventional areas.

  1) In the case of autonomous cars, TESLA has opened the way to this approach, combined with electric engines. All manufacturers now participate in such developments.

     These cars are mainly based on an automatic steering system. A recent update allows the on-board computer to take control of the vehicle on a highway, follow the dividing lines, make assisted lane changes and control vehicle speed with minimal human intervention. Such a car makes the driver’s life easier, but does not allow him/her to sleep at the wheel or travel to an arbitrary destination in a completely autonomous way.

  2) In transportation: Amazon plans to use drones for package distribution in the final stage of customer delivery. Similarly, in factories or in large department stores, the transport and distribution of supplies or money is entirely automated. However, are the inherent security risks well covered?
  3) In Australia, at Rio Tinto’s West Angelas mine, several mining vehicles are automated. They are 250-ton transport units fully equipped with a precise GPS system, an obstacle detection system and an interconnection system based on a wireless network. There is no human driver on board: the trucks travel along the roads between the filling point and the deposit site, and the drilling machines are able to drill tens of thousands of meters in a completely autonomous way and with exemplary precision.
  4) In road transport, millions of trucks are permanently on the move, everywhere. Trucks that are not operated by humans can run around the clock (24/7), excluding maintenance and refueling stops. Productivity is therefore greatly improved compared with human-driven systems. Automated trucks are also more accurate and efficient, manage fossil fuel consumption better, and reduce the hazards and accidents associated with fatigue, stress and other human factors. Moreover, as the trucks are self-contained and unoccupied, far fewer human resources are required, which increases the company’s profitability and reduces the risk of accidents.
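The lane-keeping behavior described in point 1 can be illustrated with a toy control loop. This is not the manufacturer's algorithm: the gains, the time step and the simplified vehicle model are all assumptions, but the structure (a proportional-derivative controller steering back toward the lane center) is the classical one.

```python
# Hypothetical sketch: PD control for lane centering, of the kind an
# automatic steering system uses to follow the dividing lines.

def pd_steering(offset: float, prev_offset: float, dt: float,
                kp: float = 0.5, kd: float = 0.1) -> float:
    """Return a steering command from the lateral offset (in meters)
    between the car and the lane center; positive = drifting right."""
    derivative = (offset - prev_offset) / dt
    return -(kp * offset + kd * derivative)  # steer back toward center

# Simulate a car starting 0.6 m right of center over a few seconds.
offset, prev, dt = 0.6, 0.6, 0.1
for _ in range(50):
    steer = pd_steering(offset, prev, dt)
    prev, offset = offset, offset + steer * dt  # toy lateral kinematics
```

The derivative term damps oscillation around the center line; with these assumed gains the offset shrinks steadily over the 50 steps.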
image

Figure 2.5. Autonomy: even big trucks are becoming IoT devices

2.5.3. A necessary adaptation

We cannot go back: according to a Eurobarometer survey, published in 2015, 72% of Europeans believe that robots are a positive step forward for society. The field of artificial intelligence and robotics is progressing rapidly, raising the question of the adaptation of our societies to these new technologies. The European Union currently supports more than 120 projects through different programs. For example, €700 million will be allocated until 2020 under the “Horizon 2020” program.

2.5.3.1. Management of the ethics and legal challenges in Europe

“The European Union keeps a global vision in its approach to artificial intelligence”. EU policy makers are currently playing a positive role in this adaptation by “proposing frameworks that promote stability, well-being and economic progress”.

Even if the values discussed above are of key importance in the design and development of robotic systems, we have to focus our attention on the following viewpoint.

Pawel Kwiatkowski, a lawyer, stated that “robots are not recognized in civil law. Can a robot express something? The answer is quite simple when we talk about non-complex algorithms, but if things get complicated, we run the risk of having problems”. It is in this sense, and whenever such a situation arises, that the only regulator we have to deal with this kind of case is ethics. It is always the same thing: in daily life, when we are faced with a situation that is difficult or unpredictable (a complex system), a quick decision must sometimes be taken; the decision maker is alone with his or her responsibilities and must act in good conscience to choose the best possible solution.

On 20 April 2016, the EU Committee on Transport presented a study on new technologies in the field of transport. The draft report of the Working Group on Robotics and Artificial Intelligence reflects the legal issues that need to be resolved. Indeed, in the event of an accident, who is responsible: the manufacturer? The driver of the autonomous car? The text aims to elaborate avenues for the drafting of new rules, voted on by the Committee on Legal Affairs at the end of May 2016. Thus, ethics is still required to help resolve the problems encountered.

2.6. Ethics and the elementary rules of Asimov in robotics

In robotics, it is usual to refer to the Three Laws of the science fiction author Isaac Asimov [ASI 82, ASI 42], published for the first time in 1942. These are common-sense “laws”, as follows:

  – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  – A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
  – A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

In reality, no Asimov law is implemented in robots. This would imply a world view that robots are not capable of understanding: they do not grasp the symbolic dimension of things and have very limited semantic abilities. They only obey pre-defined programmed instructions.

Nowadays, however, with the concept of deep learning (as implemented in IBM’s Watson machine), it is possible to consider robots that take an interest in human beings.

Similarly, in factories, new-generation robots (exoskeletons, cognitive robots, etc.) can work side by side with humans without hurting them, physically or even mentally. They are made to sense the presence of a human being, to control their own movements, and not to crush a limb when an unpredictable gesture occurs!

In this context, the three laws proposed by Isaac Asimov were supplemented and amended. As the first law was incomplete, he added a fourth law, the “zeroth” law, preceding the three others, as follows:

  – Zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  – First law: A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where this would conflict with the zeroth law.
  – Second law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the first or zeroth laws.
  – Third law: A robot must protect its own existence as long as such protection does not conflict with the first, second or zeroth laws.
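The amended hierarchy above is, at heart, a strict priority ordering over rules. The following sketch is purely illustrative (no real robot implements these laws, as noted above), and it simplifies the “except where…” yield clauses into a check of the laws in order of precedence; the `Action` attributes are hypothetical.

```python
# Hypothetical sketch: Asimov's amended laws as a priority-ordered
# filter over candidate actions. Checking laws in order identifies the
# highest-priority (binding) violation.

from typing import Callable, List, NamedTuple, Optional

class Action(NamedTuple):
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    is_ordered: bool = True       # was the action ordered by a human?
    endangers_robot: bool = False

# Each law is a predicate returning True if it permits the action.
# Index 0 is the zeroth law; the list order encodes precedence.
LAWS: List[Callable[[Action], bool]] = [
    lambda a: not a.harms_humanity,   # zeroth law
    lambda a: not a.harms_human,      # first law
    lambda a: a.is_ordered,           # second law (obedience)
    lambda a: not a.endangers_robot,  # third law (self-preservation)
]

def first_violation(action: Action) -> Optional[int]:
    """Return the index of the highest-priority violated law,
    or None if every law permits the action."""
    for i, law in enumerate(LAWS):
        if not law(action):
            return i
    return None

def permitted(action: Action) -> bool:
    return first_violation(action) is None
```

A real arbitration scheme would be far subtler, since the yield clauses make a lower law conditional on the higher ones rather than simply vetoed by them; the sketch only shows why the ordering itself matters.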

2.6.1. Ethics and sustainability

The rules of Asimov contain within themselves the roots of dependence, positive collaboration and resilience. It is a sustainable concept that is highlighted here, and a framework that is both well defined and widely open to ethics. Therefore, ethics gives the new technologies an orientation, and it will become possible to define new BECC.

2.6.2. General comments

With the evolution of technologies, a new paradigm emerges. Those who speak of risk evoke machines interacting with other machines in ways that will be totally out of our control. There is indeed the idea that the machine will, at some point, self-repair and self-develop. For now, this is still science fiction: no machine is capable of doing so. Nobody can say if, or when, such a paradigm change will occur, but there are still many challenges to be addressed and monitored, particularly in the fields of complex networking, neurobiology and cognition, before trying to emulate the characteristics of living beings.

Beyond this point, progress would no longer be the work of humankind but of an artificial intelligence in constant progression. It would induce such changes in human society that no human being could apprehend or reliably predict them. The risk is the loss of human influence, political power and destiny, and thus the creation of a robotic civilization (associated with a meta-governance).

Personal consciousness would no longer be that of particular living beings, but a collective consciousness derived from robots (including notions of consciousness of one’s own life, self-consciousness, choices about one’s own future, survival instincts, etc.), giving them the ability to think and decide on their own actions.

Such discussions took place at the end of 2014, when many scientists and experts in artificial intelligence issued a warning by calling for research in this field to be made more “reliable and beneficial” to everybody. Given the great potential of artificial intelligence, it is important to ask how to reap its advantages and fruits while avoiding the potential traps.

This refers to the possible loss of man’s control over the machine that humankind itself created. Steve Wozniak, Steve Jobs’ friend and co-founder of Apple, even imagined a “scary future” in which humans could be turned into “domestic animals” or “crushed like ants” by the robots they created. For his part, Bill Gates, while pointing out that a quarter of Microsoft’s research effort is devoted to artificial intelligence, wondered, in 2016, how humankind could not be worried.

We can also imagine several scenarios. First, a robot could be the equal of a human being, or superior: able to learn, think and hold its own philosophy of things. This could go much further, to the enslavement of human beings, or the idea of bio-robotic-assisted conception (regulating, for example, the conception of female progeny). This opens important questions about the ethics, consciousness and objectivity of robotics. Is it possible to develop an AI that is ever more efficient and closer to the human being without breaking certain ethical limits? Could drifts toward domination occur?

The problem that arises is that of robot-ethics, the ethics applied to robotics. It draws on human ethics to guide the design, development, production and use of robots. Robot-ethics, which dates back to the 2000s, goes much further than the laws of Asimov. It is a human-centered ethics, which must respect the most important and widely accepted principles of the Universal Declaration of Human Rights. This discipline therefore combines scientific disciplines (physics, life sciences, mathematics, etc.) with the humanities, law, philosophy, religion and spirituality.

2.7. Conclusions and perspectives: the problems that could arise from robotics

Our world is changing. With the introduction and the integration of artificial intelligence and robotics, we are living a paradigm change. How far will we go with robotics? And what is the future place and destiny of humankind? Is there a new technical, social or societal status to be expected for robotics? What is the role of ethics and how can we define it within this context?

Even if robots become smart enough to do the jobs that people no longer want to do, there is no evidence that they will match the efficiency and quality of work of a human being. If a robot encounters a problem that it has never faced before, if it is not properly trained, and if there is no longer a human being to monitor it, then we risk a completely unpredictable robot, with dangerous reactions that will not necessarily be appropriate and could cause damage to anyone.

Today, humankind is faced with a huge problem. We are able to react, improvise and unconsciously make decisions of “good sense” and sensitivity: this is what we call “true expertise”. AI is becoming highly performant, but it is still very far from providing such true expertise, and nothing guarantees that AI will be good for humanity.

Finally, the new ethical, sustainable and legal challenges associated with the development of robotics and artificial intelligence are enormous. In modern and advanced technologies, we can say that “there are urgent questions to which we must find answers” and that there is still room to adapt ourselves and to create a smarter and more sustainable planet.
