6
Robots: Challenge to the Self-Understanding of Humans

Robotics is not one of the classic NEST developments. It is rather part of a long-standing development that usually has very clear areas of application: industrial robotics, service robotics, drones, self-driving automobiles, messenger robots and care robots. Their rapid development in the last 10 years has, however, motivated numerous questions about the meaning of robotics for the future relationships between humans and technology. My thesis in this chapter is that we are forced by the advances in robotics to better understand ourselves as humans, both as individuals and in our social context.

6.1. Autonomous technology: challenges to our comprehension

For decades, we have been able to observe how technological systems have been equipped with ever more “autonomy”. An early mass-market application was the automatic transmission in automobiles, which quickly gained acceptance globally. In the 1970s and 1980s, the use of industrial robots resulted in the extensive automation of assembly line work, making production substantially more efficient but also making millions of jobs superfluous in Germany alone. In such applications, robots execute precisely defined activities in the production process. Robotics is also employed in areas in which human activity is not possible, is unduly dangerous or is extremely tedious. Familiar examples are the use of robots in space flight, for maintenance in nuclear power plants and as artificial deep-sea divers.

While many of these developments have taken place in factories, behind safety barriers or in inaccessible areas, and thus out of sight of the public, substantial advances in robotics may increasingly affect people’s life-world. Technical advances in sensors that improve the perception of the environment, in mechatronics that make movements such as climbing steps possible, and in electronics, made possible by the enormous increase in available information-processing capacity, permit robots to take over certain tasks as autonomous systems in a human environment and, for example, to take the place of humans. Window-cleaning robots replace window-cleaning crews, search engines make inquiries on the Internet, autopilots fly planes, service robots work in chemical plants or nuclear power plants and automated monitoring systems replace or supplement human guards.

A certain category of technology futures has made robots familiar to us for decades. From ideas about the future such as those presented in science fiction literature and movies, we are familiar with robots taking over important functions, cooperating and communicating with humans, being empowered to make decisions on their own and being able to carry out actions. Robots belong to the entirely natural inventory of the imagined future worlds of global hits such as Star Wars or Star Trek, which would be hard to imagine without them.

This seems to show that a normalization of robots has been achieved even before they occupy the place in the human world that they already have in these futures (in contrast to the normalization of nanotechnology; see Chapter 5). In these futures, robots take on two forms: as a threat to humans by staking a claim to taking control, or as a source of help and support. This duality contains in a nutshell a central question of many technology debates and thus, ultimately, the fundamental ambivalence of technology [GRU 09a], where assessments alternate between the feared loss of control over technology and the desire for ever more support.

The immense progress made in technology has been, on the one hand, a result of human creativity and productivity, thus giving us reason to be proud and self-confident. Analogous to the Genesis story in the Old Testament, Man, as the creator of technology, could look back at his work at the end of each day and say that it was good. But this does not take place, at least not unambiguously. We are plagued by doubts regarding what this work that we ourselves have created means to us. The techno-visionary prospects of Industry 4.0 (I refer to Wikipedia due to the lack of citable publications), for instance, are only in part stories from a wonderful world of future production. The question that is instead inseparably linked to these prospects is where man’s place is in this context, above all where he will find opportunity to work. This was put into a succinct formulation: “Why the Future Doesn’t Need Us” [JOY 00]. Technological progress is accompanied by concerns that the future might sometime not even need us anymore. It is a concern that we could be the victims of our own success in the medium or long term. This gnawing self-doubt prevents us from looking back favorably at our work at the end of the day and just feeling satisfied.

My thesis in this chapter is, on the one hand, that this ambivalence is demonstrated particularly succinctly in robotics. On the other hand, it is that we should learn from this ambivalence instead of simply accepting it and complaining. Employing the hermeneutic perspective, I would therefore like to ask what the advances in robotics mean for our understanding of ourselves as humans and for our understanding of humans. It will be seen that technological progress forces us to formulate our self-description more precisely, at any rate when we consider ourselves as the “other” of robots or other technological artifacts. As a case study, I will examine the question of whether robots can make plans, and what it means for us humans if they conquer this domain traditionally reserved for humankind (section 6.2). While this question is an attempt to interpret developments that have been taking place for a long time, taking at least a short look at the techno-visionary developments of robotics offers further opportunities to trace current developments in the relationship between humans and technology (section 6.3).

6.2. Robots that can make plans and Man’s self-image

In this chapter, the question of whether planning competence can and should be attributed to robots – as is common parlance in robotics – serves as an illustrative example. The focus is on autonomous robots which, for example, have to find their way through unknown surroundings and are not operated by remote control. The question is in what manner, by what right and to what end one can say that these robots plan, what understanding of acting and planning such talk is based on, and what its conceptual implications are. Profound hermeneutic questions regarding the distinction between human beings and robots are hidden behind these seemingly simple questions.

6.2.1. Planning robots

In speaking about robots, in particular about autonomous systems, terms from the field of planning are often used. This use of planning language reaches back to the early days of the artificial intelligence (AI) movement in the 1970s. The roboticists’ fundamental assumption about planning goes back to a statement made decades ago:

“Solving problems, searching, and planning are the means with which instructions […] can be obtained […] out of a large knowledge base […] it is a matter of defining the initial state, the goal, and any known intermediate states […]. In general, a number of paths lead to the goal. […] In planning, a means-end table is first drawn up in order to recognize central decisions” [DEC 97].

The classical application in direct analogy to humans in their life-world is a robot that has to find its way through surroundings unknown to it, for instance, by overcoming or circumventing obstacles in moving forward. Another application is soccer-playing robots, which require coordinating the actions of several players. On the part of the constructors, development toward greater robot autonomy is progressing rapidly. This will further increase the demands on the planning that robots have to perform in order to carry out the tasks intended for them and to deal with unforeseen, unprogrammed situations while executing an autonomously established plan:

“In the future, robots will be far more than special machine tools. They will work together autonomously with human beings to do complex tasks and to do defined work without exact specification of the course of action in unknown or changing surroundings” [STE 01].

This 15-year-old expectation has in the meantime largely become reality. For example, autonomous automobiles [MAU 16] are nothing other than autonomous robots that move on wheels, transporting people or goods. Street traffic is a continuous sequence of unexpected events. Drones are flying robots that can independently search for their targets in a military area. And some robots are already in use in the field of caregiving.

Aspects of planning have played a central role in the theory of AI and in the realization of autonomous artifacts for decades now [POL 95]. A robot as an autonomous system given the task of finding its way through an unknown environment and of carrying out a set task – for example, a transport operation within a building – is one of the most important applications and tests of its performance. This poses the question of the understanding of planning that applies here and of the relationship between the robot’s planning and human planning. If planning competence is ascribed to robots in a more than metaphorical manner, they are incorporated into a “community of planners” – a step toward a socialization of technology [JOE 01]. The importance of such an attribution becomes apparent when compared to philosophical anthropology, in which the ability to plan and the command of the linguistic means for imagining possible futures, which planning requires, were seen as elements of humanity’s special position (Sonderstellung) [KAM 73]. Answers to these questions obviously touch upon questions of responsibility – who would be responsible for a robot’s planning and for its consequences?

The quotations given here continue to be up to date despite the rapid technological progress that has been made in some fields. One could say that they were ahead of their time. Newer developments such as evolutionary robotics, organic computing or adaptive ambience [GUT 15] lead to advances and new technical opportunities as well as to new sociotechnical constellations [RAM 07]. The fundamental questions regarding the interface between humans and robots remain, however. Examples are questions about the collaboration between humans and robots at work [MON 15], the question of whether robots can evolve [GUT 15] and the question of whether and when it will be possible for the activity of robots to replace that of humans [JAN 12].

6.2.2. Planning as a special type of acting

Because planning is a specific form of action [HAB 68, GRU 00], the introduction of the concept of planning begins with the following definition of action, which contrasts action with behavior [JAN 01]:

  • – actions can be attributed to actors (as the cause of the action) from the perspective of an observer;
  • – actions can be carried out or omitted, not in the sense of arbitrary freedom, but on the basis of reasons;
  • – actions can succeed or fail, i.e. there are criteria and conditions for success, which are commonly explicated in terms of the instrumental rationality of means and ends.

The classification of something as an action is an interpretation made by external observers or by the actor himself [LEN 93]. In the latter case, the actor must dissociate himself from himself in order to be able to make the interpretation. Acting is thus not an ontological predicate but an interpretation of the corresponding situation and an argumentatively explicable attribution. If a truck drives by, we do not say that the truck is acting but that the truck driver is acting, whereby a causal relation between the driver’s action (putting on the brakes) and the perceivable effects of this action (the truck stops) is assumed. Slipping on ice or a coughing bout would not, as a rule, fulfill the criteria for acting but are “occurrences” [KAM 73] – events which simply happen. A coughing bout can neither succeed nor fail. This is only seemingly trivial because there are situations in which coughing can count as acting: removing crumbs from the respiratory tract, wanting to attract attention in the concert hall or warning a business partner that he is in danger of making a mistake in negotiations.

This shows clearly that the classification of a phenomenon as acting is done through attribution, which is based on an interpretation of each specific situation and its context [SCH 87, LEN 93]. Two coughs may be phenomenologically identical and yet, through interpretation, be categorized in the one case as behavior and occurrence and in the other case as acting. Inasmuch as interpretations can be controversial, the attribution of the concept of acting can also be challenged in individual cases.

Acting in the above-mentioned sense allows for learning. It can be derived from the criteria given above that acting is seen as capable of being improved, whereas mere behavior is not. Knowledge about relations of means and ends and knowledge for correcting faults can only be related to action, not to behavior. Acting is subject to conditions for success, out of which a measure of success can be derived, and it makes learning possible because – technically speaking – the gap between the target aimed at and the achievements made so far can be used to inquire into the causes of this difference and to think about and introduce possible improvements.

The concept of acting in the sense mentioned above also allows relating it to the concept of responsibility (Chapter 2). With reference to actions, one can speak of reasons, consequences and responsibilities. Inasmuch as humans describe themselves as being capable of acting, they place themselves in a sociocultural context and define themselves as social beings who can develop and discuss actions, who can choose among alternative options for action, who can carry out the actions and who can – both beforehand and afterward – talk about consequences and responsibility (Chapter 2). The concept of action, in contrast to mere behavior, is part of the (modern) human being’s self-constitution. However, there are alternative self-descriptions. Cultures are conceivable which do not distinguish between acting and behavior but subsume everything under behavior. Consequently, these cultures would have to do without concepts such as responsibility, guilt, justice and injustice. Recent debates on a purely naturalist image of humans have pointed in this direction [JAN 12]. It therefore has to be asked in the following whether and to what extent an attribution of a capacity for action or a planning competence to robots would go hand-in-hand with human acting and planning, or where conceptual differences remain.

Planning is a matter of active preoccupation with future action for the purpose of consideration and preparation. Planning is an anticipatory reflection on purposes and goals or on action schemes without directly actualizing them: a drafting of future options for action in the sense of a test action [SCH 81, STA 70]. The purpose of planning is the prior drafting, reflection and judgment of possible options for action with the aim of preparing the action. Planning realizes purposes only indirectly: only the plan’s implementation is supposed to realize the purpose, not the plan as such. Planning is a hypothetical and experimental action in the space of possibilities and alternative options of what could be done. It presents itself:

“[…] as a dramatic testing of different, competing possible directions of acting in the imagination […]. Experimentally, various elements of habits and drives are combined with one another, in order to find out how the resulting action would look in case it was initiated” [DEW 22, p. 190].

Schütz [SCH 71, SCH 81], building on Dewey [DEW 22], emphasizes planning from the perspective of a phenomenologist and likewise points to the difference between the projected and the real action:

“[…] devising action takes place in principle independently of all real action. Any drafting of action is much rather an imagination of action, i.e., an imagination of spontaneous activity, but is not the spontaneous activity itself” [SCH 81, p. 77].

Planning is devising and preparing goal systems or action structures which prima facie are neither known nor evident: if a decision maker already had an action routine at hand, he or she would not need to plan at all. Planning is an intellectual anticipation of future action and a method for attaining adequate action anticipations [STA 70]. Planning is always concerned only with situations in which there is a need for design, preparation, construction, composition, choice and decision making.

An essential attribute of the concept of planning is its second-order purposive-rational character [GRU 00]. First, each of the individual action steps of a plan has to be purposive-rational and can be expected to lead to the realization of certain subgoals. Second, these elements also have to be arranged in a purposive-rational manner. The composition of the individual elements has to be done so that the objective as a whole can be attained: planning consists of the purposive-rational composition of purposive-rational elements [HAB 68]. This second-order purposive rationality implies that planning takes place within the space of reasons and knowledge and has to be done discursively (see Chapter 4) [GRU 00]. A planning discourse consists of (1) a discourse on the determination of purposes and goals, (2) the elaboration of alternative pathways for reaching the envisaged goal by certain means and (3) the decision among the alternative options. This structure will serve as an important pattern for analyzing the planning of robots.

6.2.3. Step 1: Can robots act?

The definition of acting given above leaves open who (or what) comes into question as an actor. It is an empirical question whether the criteria necessary for acting can be fulfilled only by human beings, by certain human beings in certain situations, by rational beings in the sense of Immanuel Kant, by certain animals (e.g. primates) – or even by robots. The criteria-based definition of acting is open in both directions: not all human beings have to be capable of acting, and actors do not necessarily have to be human. A look at very young children, at people suffering from dementia, at coma patients, at certain types of disability and at people with compulsive mental disorders shows that not all human beings can act. Even sleeping humans are not able to act.

Conversely, beings which do not belong to the species Homo sapiens but can nonetheless act are at least conceivable. It is an empirical question of the fulfillment of criteria on the basis of interpretations and reconstructions. Admittedly, considerable problems of judgment can arise here, for example when the behavior of a chimpanzee is to be classified as action. The necessary interpretations could be criticized as mere anthropomorphic misrepresentations because humans do not share their discursive community with primates. The latter also applies to the relationship between human beings and robots. One difference from the primates, however, is that the robot, as a construct of human beings, should be better known in its functioning than primates are.

The next step now is to ask how the types of robot planning described in section 6.2.1 above appear in the light of the definition of acting given. In short, the three criteria seem to be fulfilled:

  • – action causation: there is presumably no question that autonomous robots can cause something, in the sense that the effects of their actions can be causally ascribed to them;
  • – identification of success or failure: inasmuch as such robots have a task (e.g. defusing mines, forwarding a message to an address or bringing persons from A to B), success, failure or partial success can easily be determined from the perspective of an external observer;
  • – capability to omit: a specific robotic action, for example, circumventing an obstacle, seen from the perspective of an external observer, could have been omitted, analogous to the action of a human agent – namely, if the arguments that were decisive for the action had been different, for instance, due to a different diagnosis of the situation. Inasmuch as human freedom is not to be understood in the sense of a randomizer, but means the freedom to decide on the basis of good reasons, one would have to concede that a robot which chooses one option out of a spectrum of action schemes which is suited to the diagnosis of the situation and to the tasks it has to carry out, would have desisted from this action and have chosen another, if the reasons had been different.

These considerations do not lead to any arguments for denying autonomous robots the capability of acting. However, a consequentialist argument against this conclusion is repeatedly brought forward: the argument of the ascription of responsibility connected with performing an action (see Chapter 2). If the capability to act were ascribed to robots, some say, then responsibility would also have to be ascribed to them. These voices conclude that because the attribution of responsibility to robots seems counterintuitive, robots cannot be considered capable of acting.

A somewhat closer look at the concept of responsibility shows the fallacy in this argumentation. The argument that the attribution of action competence to robots implies the attribution of legal or moral responsibility rests on a confusion of different concepts of responsibility (see [GRU 12c] and Chapter 2). The capability to act is a necessary precondition for being held responsible, but not a sufficient one. Mere causal responsibility for an action would be applicable to the robot, inasmuch as acting is conceded to it – but this does not automatically imply any legal or moral responsibility on the robot’s part. Even where the robot that acted bears causal responsibility, the attribution of legal and moral responsibility could lead to the robot’s owner, its operator or its manufacturer [CHR 01, DEC 13].

The argumentation for attributing action competence to robots presented above rests on a premise which gives reason for further differentiation. In this argumentation, the understanding of acting as an attributive term was assumed from the perspective of an external observer. Let us now consider a thought experiment: first, an acting human being is observed in this manner. Then, this human being is replaced by a robot acting in a functionally equivalent manner. If an external observer now considers this robot, the result of the interpretation should be the same for the robot as for the human being: both are regarded as acting. This thought experiment confirms, on the one hand, the train of thought developed above: robots which replace acting humans act. On the other hand, a difference remains. In the thought experiment, neither the human being nor the robot observed was asked about his or its activities, tasks, diagnoses and reasons. The interpretation was made solely from the observing and reconstructing external perspective. This seems artificial where human beings are concerned: why not ask acting persons for their reasons? In the case of robots, however, asking for their reasons to act might be more difficult or even impossible. This observation indicates a deep-seated difference between humans and robots even if both are regarded as capable of acting. We will return to this difference later on.

6.2.4. Step 2: What do robots do when they plan?

Usually, two types of planning robots are distinguished, corresponding to the distinction normally made between artificial intelligence (AI) and artificial life (AL) robots [KIN 97]:

  • – robots which, based on an analysis of environment data, can choose from a predetermined number of options for action according to a likewise predetermined set of criteria;
  • – robots based on neural networks, which can incorporate learning effects and then change the basis for planning and deciding.

In both cases, planning as a preparation for future action obviously plays a crucial role. The first type is of a rather simple nature because the set of options for action and the set of criteria are predetermined. Planning is, in this case, limited to assigning options for action to a diagnosis of the situation. The second type, however, is of particular interest because the action schemes coming into question may be created originally by the robot and are not previously programmed as part of a predetermined set. Learning through an accumulation of experience from carrying out actions – for example, in moving through unknown terrain – is at the heart of this type of planning. New ways of acting can result from learning processes in an unforeseeable manner [KIN 97]. This makes a control architecture necessary: the unforeseeable behavior of a robot due to learning could lead to undesirable results. Robots could get out of control. The control architecture has to ensure that the robot’s behavior remains within a defined frame, or that the robot is turned off.
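To make the difference between the two types tangible, the following minimal Python sketch illustrates the first, simpler type: a fixed set of options for action is scored against a fixed set of criteria on the basis of a diagnosis of the situation derived from sensor data. All names, thresholds and scores are hypothetical illustrations and are not taken from any of the systems cited in this chapter.

```python
# Minimal sketch of the first, simpler type of planning robot: a fixed set
# of options for action is scored against fixed criteria, given a diagnosis
# of the current situation. All names and numbers are hypothetical.

ACTIONS = ["move_forward", "turn_left", "turn_right", "stop"]

def diagnose(sensor_data):
    """Reduce raw sensor readings (distances in meters) to a situation description."""
    return {
        "obstacle_ahead": sensor_data["front_distance"] < 0.5,
        "clear_left": sensor_data["left_distance"] > 1.0,
        "clear_right": sensor_data["right_distance"] > 1.0,
    }

def score(action, situation):
    """Predetermined criteria: prefer making progress, avoid collisions."""
    if action == "move_forward":
        return -10 if situation["obstacle_ahead"] else 5
    if action == "turn_left":
        return 3 if situation["clear_left"] else -5
    if action == "turn_right":
        return 3 if situation["clear_right"] else -5
    return 0  # "stop" is always admissible but never preferred

def choose_action(sensor_data):
    """Planning reduced to assigning a predetermined option to the diagnosed situation."""
    situation = diagnose(sensor_data)
    return max(ACTIONS, key=lambda action: score(action, situation))

# Example: an obstacle directly ahead, free space only to the right.
print(choose_action({"front_distance": 0.3,
                     "left_distance": 0.4,
                     "right_distance": 2.0}))  # -> "turn_right"
```

The second type would replace the hand-written scoring criteria with a learned component, for example a neural network whose weights change with experience, which is precisely what makes a control architecture necessary.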

If planning is understood to be an experimenting test action [SCH 71], the question arises of how a robot’s learning process proceeds in detail. The explanation:

“Learning consists of the reorganization and re-evaluation of the individual links within a neural network […]. We have previously spoken of supervised learning, by which, for example, human beings exercise control. If we go further to unsupervised learning, then we replace the monitoring system by a set of well-defined rules for learning. The entire system optimizes itself according to these learning rules” [SCH 93b]

leads to the conclusion that a robot’s learning happens through experience. In reflecting on an autonomous robot of the type AMOS [KNI 94] or ARMAR [ASF 99], it is, first, important that it does not have a prefabricated model of its surroundings at its disposal but produces one itself through experience and continuously improves and adapts it. An illustrative example of dealing with obstacles is a delivery or courier robot moving around in the building of an administrative body. Through sensor signals, the robot generates a model of its surroundings while it is moving. As long as these surroundings are static, the model produced, consisting of walls, doors, elevators, etc., can be used without problems. During operation, the robot constantly checks, by means of sensor technology, whether its model is still up to date. If a door that is normally open happens to be closed, a plan breakdown occurs, just as when an obstacle unexpectedly prevents the robot from moving ahead. A plan breakdown reveals a deviation between the real situation and the expectations. In such cases, the robot defines the area in which a difference between the model of the surroundings and reality occurs as a region of interest (ROI) [KNI 94, p. 77]. Through experimental handling of the unexpected situation, the robot can gather experience. It can try to get the obstacle to make way by giving an acoustic signal (the obstacle could be a human being who steps aside after the signal), it can try to push the obstacle aside (maybe it is an empty cardboard box) or, if nothing else helps, it can notify its operator. Maneuvers such as parking or making a U-turn in a corridor can be planned in this manner [SCH 95]. One of the most important challenges in this work is classifying the plan breakdowns [KNI 94, p. 80] in order to later diagnose the problem and then take the appropriate measures as fast as possible.
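The handling of such plan breakdowns can be pictured, in strongly simplified form, as an escalation through a few predefined recovery strategies, roughly as in the following sketch. The strategy names, classifications and outcomes are hypothetical illustrations and are not taken from the AMOS or ARMAR implementations cited above.

```python
# Strongly simplified illustration of plan-breakdown handling: when the
# environment model and sensor reality diverge, the deviating area is marked
# as a region of interest (ROI), and the robot escalates through a few
# predefined recovery strategies. All names and outcomes are hypothetical.

RECOVERY_STRATEGIES = ["signal_acoustically", "push_gently", "notify_operator"]

def detect_breakdown(expected, observed):
    """A plan breakdown is any deviation between the model and reality."""
    return expected != observed

def try_strategy(strategy, roi):
    """Hypothetical outcomes of each recovery attempt."""
    if strategy == "signal_acoustically":
        # Succeeds if the obstacle is a person who steps aside after the signal.
        return roi["observed"] == "person_in_corridor"
    if strategy == "push_gently":
        # Succeeds if the obstacle is light, e.g. an empty cardboard box.
        return roi["observed"] == "cardboard_box"
    # Notifying the operator always "succeeds" as a last resort.
    return strategy == "notify_operator"

def handle_breakdown(roi):
    """Escalate through the predefined strategies; the gathered experience
    could later be used to classify breakdowns and react faster."""
    for strategy in RECOVERY_STRATEGIES:
        if try_strategy(strategy, roi):
            return strategy
    return "unresolved"

# Example: the corridor, expected to be free, is blocked by a cardboard box.
roi = {"location": "corridor_2", "expected": "free_corridor",
       "observed": "cardboard_box"}
if detect_breakdown(roi["expected"], roi["observed"]):
    print(handle_breakdown(roi))  # -> "push_gently"
```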

The underlying planning theory paradigm consists of the cybernetic planning model of feedback in a system–environment interaction. The heart of this planning concept [STA 70, CHU 68, CHA 78] consists of a cybernetic feedback loop: a planning system plans to change certain parameters of its surroundings and takes measures to achieve this goal. It then monitors the effects of the implementation of the measures and evaluates them against the expectations. Deviations from the expectations are detected by means of this feedback control mechanism and are taken into consideration in subsequent measures. Learning consists of repeated runs of this cybernetic loop, with a corresponding accumulation of empirical information. A robot’s experimenting with unknown surroundings and the use of the resulting experience can, in fact, be interpreted as planning processes within the framework of cybernetic planning theory.
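Read as an algorithm, this cybernetic loop can be sketched as follows. The toy example of a robot approaching a target position along a corridor, with imperfect execution of its movements, is a hypothetical illustration meant only to make the plan–act–observe–compare–adapt structure explicit; it does not model any of the systems discussed in this chapter.

```python
# Minimal sketch of the cybernetic planning loop: plan a measure, implement
# it, monitor the effects, compare them with the expectation, and feed
# deviations back into subsequent measures. The toy "world" (a robot moving
# along a corridor toward a target position, with imperfect execution) is a
# hypothetical illustration only.

def cybernetic_loop(target, position, step_size, execute, sense, max_steps=50):
    experience = []                                # accumulated empirical information
    for _ in range(max_steps):
        if abs(target - position) < 0.1:           # "what is" matches "what ought to be"
            break
        measure = step_size if target > position else -step_size
        expected = position + measure              # expectation derived from the plan
        execute(measure)                           # implement the measure
        position = sense()                         # monitor the actual effect
        deviation = expected - position            # feedback: expectation vs. reality
        experience.append((measure, expected, position, deviation))
        if abs(deviation) > 0.05:                  # adapt subsequent measures (learning)
            step_size *= 0.5
    return position, experience

# Toy environment: execution is imperfect, the robot overshoots by 20%.
state = {"pos": 0.0}
def execute(measure): state["pos"] += 1.2 * measure
def sense(): return state["pos"]

final_pos, log = cybernetic_loop(target=2.0, position=0.0, step_size=0.5,
                                 execute=execute, sense=sense)
print(round(final_pos, 2), "reached after", len(log), "repeated runs of the loop")
```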

The reference to the concept of planning introduced above, in particular the specifics of purposive rationality of the first and second order and the necessity of distinguishing and selecting among alternative options (see Chapter 3 and [GRU 00]), does not provide any grounds for rejecting the concept of the planning robot. The robot makes – through sensors – an interpretation of its present situation and compares it with a goal situation. It compiles possible plans of action derived from a knowledge base and then decides on their choice and composition according to a given set of criteria. The specifics of planning, especially instrumental rationality, are obviously included. Considering the cybernetic loop, a planning-theoretical modeling of the robot’s planning is possible and adequate.

The robot’s planning in a cybernetic model is, however, an extraordinarily limited type of planning in comparison with the complexity of human planning [GRU 00]. This needs to be elucidated in two directions: (1) by exposing the cybernetic feedback as a poor and deficient planning model and (2) by examining the preplanning agreements:

  1) The cybernetic mechanism consists of learning from experience through more or less well-prepared testing and practical trials of action steps in the cybernetic control loop. In the model of adaptive and continuous planning, the robot adapts itself to the conditions of its surroundings. The normativity of planning – namely, to make a plan according to certain aims and, if applicable, to implement it – is neither taken into account in the cybernetic model, nor is there a mechanism which could reconstruct this normativity [GRU 12c]. The mechanism of checking the results of planning and comparing the present situation (“what is”) with the desired one (“what ought to be”) simply acts as a substitute for the normativity of the planning goals. The robot is not compelled to determine objectives beforehand and to reason about the means for reaching them and about possible incidental consequences; it can instead try actions and classify the results. What is without doubt sensible in the case of, for instance, AMOS and ARMAR (see above) fails, however, in planning tasks of other types, such as building a house or carrying out large-scale technical projects, where much more elaborate anticipation is needed [SCH 81]. Instead of adapting to environmental conditions, the latter case is a question of defining objectives and of realizing them by applying appropriate measures. The specifics of this type of singular planning are the normativity involved and the prior modeling and simulation of the entire process, including permanent reflection. In comparison, cybernetic planning is nothing more than an improved method of trial and error – a method which, as a rule, plays no great role in normal human planning. Thus, even if robots are capable of planning, they can only do so within a poor concept of planning.
  2) A second type of limitation of robots’ planning competence results from the question of the decisions made before planning. Concrete planning is not free of premises but is based on preliminary decisions through which the space of possibilities, options and searches for the solution of the respective planning task is predetermined. The initial conditions determined before starting the planning process define the stage and set the scene for subsequent planning. They decisively influence the manner in which one can plan and what possible plans could look like. Elements of such preplanning agreements (see section 4.2 and [GRU 00]) include the determination of the target areas to be taken into consideration, of the addressed system’s borders, of the range of admitted goals and means and of the criteria for choosing a plan among several possible ones. Preplanning agreements are contextual restrictions of the principally conceivable diversity and are intended to reduce contingency. Planning contexts can be distinguished according to whether the preplanning agreements are under the planners’ control or whether they were set for the planners from outside. Robots are, in this respect, in a weak position.

6.2.5. The difference between planning humans and planning robots

A robot’s planning is, as portrayed, describable in the cybernetic planning model (section 6.2.4). The objectives stated and the target setting are limited: in part, algorithm-like sequences are determined; in part, the knowledge base is predefined; limits are set for the robot through the control architecture; and so on. Preplanning agreements (section 6.2.4) have been made which cannot be revised by the planning robot itself. Thus, though the robot’s behavior can definitely be designated as planning, it is a very special and reduced type of planning:

“The behavior of autonomous robots is – by using the techniques of information processing available today – marked by their knowledge base in the form of programs and data, and precognition in whatever form it takes. This knowledge base, its use, and expansion is, even in the case of so-called self-learning systems, predetermined by humans in the realization of robot systems” [STE 01].

A first intuition now might be: the planning robot is forced to plan within preplanning agreements made by humans which it cannot change – a poor situation. Humans, on the contrary, could be regarded as free to change preplanning agreements. However, the simple juxtaposition of a freely planning human being and a strongly controlled planning robot falls short. Human planning often also takes place in strongly restricted possibility spaces (e.g. within restrictive employment relationships). It seems, and this would be the conclusion, that there is a gradual transition from the simple planning of a robot with restrictive planning agreements to free and complex planning processes, without necessarily a qualitative leap at the transition from robot to human being.

In this manner, it becomes possible to reconstruct shifts in limits. Inasmuch as technological progress increases robots’ planning competence, the previous limits will be shifted. The borders between humans and technical artifacts are becoming blurred. However, Latour’s demand [LAT 87] to speak of robots and human beings in the same language and to acknowledge a complete symmetry between them turns out not to be very helpful in this connection. Though it is possible to speak of planning robots as well as of planning humans, a complete symmetry between humans and robots cannot be deduced from this, as has been shown above. Deriving a full symmetry between humans and robots from the application of the same planning terminology would be possible only by an extreme disregard of the different planning models, the differing control over preplanning agreements and the different treatment of the normative level involved. Differentiations have to be made in order to arrive at a better understanding of planning robots and planning human beings, of their similarities and differences and of shifts between them over time. Only painstaking deliberation and consideration of the differences is instructive: in comparing planning robots and planning humans, one can also learn something about planning humans, namely about the characteristics of human planning and its, in part, very narrow limits, set by heteronomous preplanning determinations and bound by certain terms of reference, e.g. in employment law or in the organization of labor in industrial processes.

When we reflect on planning robots and humans, we also reconstruct ourselves (Joerges [JOE 01, p. 196], with reference to Latour). The fact that we can speak of planning robots does not mean that the planning of humans and robots is to be put on the same level [GRU 12c]. In a certain sense paradoxically, the use of the same terms for planning robots and human beings intensifies the asymmetry instead of bringing about symmetry. If we reconstruct the work of a messenger robot, we will – formally – find the same action-theoretical structures as when we reconstruct the action of a human messenger acting as a messenger. The putatively strongly limited objective-setting competence on the part of the robot (whose tasks are programmed) is no counterargument because, in an environment regulated by employment law, even the human messenger has a very restricted objective-setting competence; in principle, he or she has to do what his or her superior demands, within the scope of the job description to which he or she has consented. On this level, the activities of the human messenger and the messenger robot are equivalent – otherwise, the messenger robot could not replace the human messenger.

Despite this, there is a considerable asymmetry between the messenger robot and the human messenger. A robot which is functionally equivalent to the human messenger, i.e. which delivers the same messenger performance, plans the errands and the solution of problems occurring in them in a specific sense and under predetermined initial conditions. The human messenger plans in that he or she fills this role, according to an analogous understanding of planning and probably with similar criteria. While the messenger robot, however, is committed to its role as a messenger through programming and control architecture, the human messenger can abandon this role. The requirement of the ability to omit acting, which distinguishes action from behavior, has to be differentiated here. For the robot, it is already fulfilled when it has the choice between a few alternative options for action – but it remains within its role. The human messenger, on the other hand, can understand the omission much more radically and can abandon his or her role. For example, the human messenger might join a strike organized by his or her trade union. Or, another example: if a human messenger observes while performing the job that a person urgently needs help for health reasons, he or she would immediately stop doing the job and instead help the person – the robot would not be able to do this. The measure of the ability to omit planned actions and to move to another track of acting and planning proves to be central to the distinction between planning humans and planning robots, and also to be a parameter for measuring future shifts in this field.

6.3. Technology futures in robotics

The best-known technology futures in robotics have without a doubt been created and spread by literary works and movies in science fiction. Long before any technological possibilities were available, robots played a central role in novels and movies. Well-known examples are the programmable machine man from Metropolis by Fritz Lang (1927), HAL 9000, presumed to be infallible, in Stanley Kubrick’s 2001: A Space Odyssey (1968) and the dome-headed R2-D2 from Star Wars by George Lucas (1977). A topic of the movie I, Robot by Alex Proyas (2004) is how the NS-5 robot Sonny attains consciousness in an emergent manner – at least as the movie presents it. Over this long tradition of robots in science fiction, works which frequently have been and are great public successes, the social appropriation of robots has already taken place. The assumption has been expressed over and over again that many people would not be very surprised to meet robots on the street, while shopping or at work. Such scenes have already taken place all too often in movies.

The relationship between technology futures and the real development of technology in robotics is therefore very different from that in other NEST developments. While for the others, visions and anxieties can run ahead of technological developments and the social debate can be extremely irritating because of the far-reaching possible consequences in every direction (see Chapter 5 for nanotechnology and Grunwald [GRU 16a] for the field of synthetic biology), the social appropriation of robotics has already taken place. In contrast to the early visions of nanotechnology, the visions of robotics seem much more familiar. This is, on the one hand, because of the long history of such characters in science fiction and, on the other hand, because the humanoid robot is the ideal of classical robotics. The size and shape of such a robot are modeled on humans, as was already the case for Fritz Lang (1927), where the machine-human perfectly emulates a woman and looks accordingly. As a result of this design, robots have been assimilated to humans even in their construction. For many practical purposes, this is sensible. It is advantageous, if not even necessary, for robots to be similar to humans in size and shape if they are to be used, for example, as assistants or companions [BÖH 14]. By their very shape, robots – the creations of humans – visually demonstrate their closeness to humans.

As robots are today increasingly entering our daily lives, a spot has already been prepared for them to fit into [DEC 11]. A complex and risky process of normalization (see Chapter 5 for nanotechnology) is not required because it has unintentionally already taken place. The hermeneutic issues are therefore very different from those raised by typical NEST developments. The point is not to first understand what meanings robots could have in the present or future world, or what the visionary debates could say about us today, in order to arrive at the possible meanings at stake in RRI debates (Chapter 1). On the contrary, an abundance of role models for robots and of models for the relations between humans and robots has already been prepared by science fiction. The question as to the meaning and also the roles of robots [MEI 12, MAI 15] can start directly with the existing role models and their depictions in movies, art and literature.

Viewed in this manner, concern with these possible roles is not a vision assessment (as interpreted by Böhle/Bopp [BÖH 14]), since we are not dealing with visions of possible new futures containing robots. We are instead dealing directly with the present. The question regarding meaning takes the form of a hermeneutic concern with the roles that are already present, as mediated by science fiction and frequently addressed by government research funding programs and by robotics research’s own description of itself.

The role of the robot as a “companion” for humans has received particular attention [BÖH 14]. The future relationship between humans and robots is frequently formulated using the rhetoric of assistance, of the colleague and of cooperation, such as in the approach followed by Industry 4.0 (see Wikipedia due to the lack of alternative sources). These relationships are propagated at the level of R&D policy and by related research projects, e.g. by the European Commission’s ICT policy, which pursues the following aims:

“We want artificial systems to allow for rich interactions using all senses and for communication in natural language and using gestures. They should be able to adapt autonomously to environmental constraints and to user needs, intentions and emotions” [ECE 12, p. 12].

Support is provided for the research to reach these goals. The research being funded is aimed at:

“[…] unveiling the secrets underlying the embodied perception, cognition, and emotion of natural sentient systems and using this knowledge to build robot companions based on complexity, morphological computation and sentience […]” [ECE 13, p. 168].

In the German long-term project, “a companion technology for cognitive technical systems”, funded by the German Research Foundation (DFG), the vision reads as follows [DAU 07]:

“Technical systems of the future are companion-systems – cognitive technical systems, with their functionality completely individually adapted to each user: They are geared to his abilities, preferences, requirements and current needs, and they reflect his situation and emotional state. They are always available, cooperative and trustworthy, and interact with their users as competent and cooperative service partners” [WEN 12, p. 89].

The role of the companion was differentiated by Böhle/Bopp [BÖH 14] as follows:

  • – artificial companions as guardians “should accompany and supervise the user while monitoring his or her health status and environmental indicators (e.g. room temperature, pollution)” [BÖH 14, p. 162]. Artificial companions as guardians could have a role in the ambient assisted living of elderly or handicapped people in order to support them and allow a safer and more autonomous life;
  • – artificial companions as assistants should enable “the user to fulfil tasks, which she or he would otherwise be unable to perform” [BÖH 14, p. 163]. The authors see “cognitive support” as a frequently desired form of support: the artificial companion should remind the person to, for example, plan an agenda or take medication. The demand placed on companions is, above all, that they be empathetic and socially acceptable [DEC 11], which requires the development of a corresponding human–machine interface;
  • – artificial companions as partners “appear as conversational vis-à-vis artificial playmates and interdependent actors. The emphasis shifts from monitoring and assistance to companionship services” [BÖH 14, p. 164]. The objective of this role is to build relations between humans and robots and to associate emotions with the relationship.

Interestingly, all of these are roles from the world familiar to today’s humans. Sometimes, it is even possible to name job profiles that fit these roles. They are not visions of a future world but an expression of present expectations that the relations currently existing between humans – in the roles of guardian, assistant and partner – can and also should be taken over by robots.

This expectation conceals diagnoses of the current world to the effect that something between humans in their different roles is not functioning well, or not well enough. If the vision of an artificial companion is a vision and thus positively connoted, then we are apparently dissatisfied with our present human companions, or we fear that we will be dissatisfied in the near future. Why should we otherwise want to have artificial companions at all and invest considerable public funding in them, which is consequently no longer available for other purposes? The hermeneutic analysis would thus concern itself – to continue the “disentanglement” of the artificial companion metaphor [BÖH 14] – with the reasons, diagnoses and perceptions of deficits that make this metaphor appear so positive that it is supported by substantial amounts of public funding. This appears particularly relevant because the positive perception even entices one to take a prognostic view of the future, i.e. of a society populated by artificial companions:

“[…] the companion metaphor may also serve as an expression indicating that in the ‘next society’ various types of intelligent artefacts will accompany us providing services and be part of our everyday life” [BÖH 14, p. 166].

While the vision of an artificial companion is thus ultimately a rather conservative one because it is related to the present, refers to available role models between humans and expects such artificial companions to produce an improvement in these models, some visions go further:

“Imagination is the first step. We dream it, then we do it. Robots have lived rich fictional lives in our books and movies for many years, but bringing them into the real world has proved more challenging. 21st Century Robot has a new approach: A robot built by all of us, tapping into the power of open source software and a worldwide community of talented dreamers. The stuff of science fiction is fast becoming science fact” [JOH 15].

Together with the thesis mentioned above that the propagation of a future artificial companion logically presupposes a diagnosis of deficits in present human companions, a consequence of this vision is an improved understanding of expectations: future robots serving as artificial companions are ultimately imagined to be the better humans. As companions, they will always be in a good mood and fulfill their role as partner or assistant perfectly; they will be well-mannered and will not tire of indulging us by serving us. Emerging from behind the expectations for technological progress to put artificial companions at our side in the future is the wish for a better human and thus criticism of ourselves (which by the way is also an important backdrop for the debate on human enhancement; see Chapter 7). In view of the many historical failures in forming better humans, whether through education, upbringing or propaganda, technological progress here takes on the key role of offering promise. This is a topic that should be embraced by a hermeneutic analysis of robots functioning as artificial companions.

An entirely different topic, one that is apparently much closer to reality but that is fraught with visionary features, is industrial production in the sense of Industry 4.0 (see Wikipedia due to a lack of alternative sources). A fact on which all characterizations agree is that Industry 4.0 will assign great significance to autonomously acting technical systems and their cooperation with humans. In this context, the artificial companion is a topic that appears in particular as talk about the “colleague robot”, which of course does not have to be humanoid in appearance. According to Wikipedia, in this case, it is the assistance function of the artificial companion that is decisive:

“First, the ability of assistance systems to support humans by aggregating and visualizing information comprehensibly for making informed decisions and solving urgent problems on short notice. Second, the ability of cyber physical systems to physically support humans by conducting a range of tasks that are unpleasant, too exhausting, or unsafe for their human co-workers” [WIK 16b].

In this future world, industrial production is supposed to run in a self-regulated and autonomous manner. Industry 4.0 implies

“[…] the ability of cyber physical systems to make decisions on their own and to perform their tasks as autonomous as possible. Only in case of exceptions, interferences, or conflicting goals, tasks are delegated to a higher level” [WIK 16a].

Being “delegated to a higher level” could be interpreted as an indication of the sovereign authority of humans to make decisions, but this is not certain; it could also mean a higher level of authority within the framework of a software or control architecture. The question as to the role of humans in this future world of industry still has to be answered. The official descriptions are conspicuous precisely for putting “the human” – whoever that may be – emphatically at the focal point, although his function is becoming increasingly unclear.

This rhetoric requires explanation. It raises the suspicion that humans are placed at the focal point of Industry 4.0 precisely because this is supposed to hide the fact that there is hardly a spot left for humans in the far-reaching visions. I do not pursue this suspicion here, but clarification of this suspicion and the future human–technology interface in Industry 4.0 is urgently necessary, particularly in view of the manifold concerns about the sphere of work [BÖR 16].

6.4. The hermeneutic view of robots

In contrast to classical NEST developments, robots are familiar to us from science fiction. Their appropriation by society no longer requires any complex processes because there are diverse role models for robots, whether considered alone or in cooperation with humans. The techno-visionary futures in the debates over meaning, in contrast, take a backseat. The hermeneutic view covers our understanding of constellations involving robots and humans that are new only in that they are becoming possible in practice, not in being present in man’s imagination.

A central element of these constellations is the distribution of responsibility between human and robot. As noted above, the attribution of the capacity to perform acts and to make plans in no way implies the attribution of moral or even legal responsibility. Even if robots make decisions autonomously, this does not mean that they have to expect legal sanctions in case they make a mistake. A self-driving automobile in which the software causes a traffic accident will not have to stand trial in court. But the question of who is legally or morally responsible is a rapidly growing field of research on autonomous driving [MAU 16]. Autonomous driving will not establish itself in practice before this question is answered unambiguously in a manner that will stand up in court.

This is a matter of making the prospective study of new sociotechnical constellations [RAM 07] and their practical elaboration the objective of respective RRI debates. Yet, these constellations in autonomous driving, just like those in robot caregiving or in the scenarios of Industry 4.0, are by no means futuristic. They are by and large very similar to constellations in today’s society, only that autonomous technology takes over functions that until now were carried out by humans. It is essentially about substituting technology for human actions [JAN 12], whether in driving, caregiving, delivering mail or industrial production. That which is substituted and the contexts in which the substitution takes place are thus quite familiar since they are aspects of our current world. The task of hermeneutics, and this is the conclusion here, is thus not to understand the meaning of techno-visionary futures that first have to be socially appropriated but to understand the present human–human and human–technology constellations and their possible transformation in future sociotechnical constellations. This offers us not only the chance to shape future constellations, but also to learn about present constellations, such as about analogies between current working conditions and a robot-like implementation of plans (section 6.2).

A recurring techno-visionary concern in this everyday task, as it can be called, of constructively appropriating gradual technological progress is Man’s possible loss of control. Here are a few examples:

  • – Man, as part of the machinery that he created, can only keep it working by degrading himself to a functioning cog in the wheel (from Charlie Chaplin’s movie Modern Times, 1936);
  • – the superiority that an economic-technological system that has taken on a life of its own exerts over the individual [MAR 67];
  • – the antiquatedness of man toward his technological creations [AND 64] and his consequent shortcomings with regard to remaining master of his technology;
  • – the fear that the future does not need mankind at all because technology has made itself independent, continues to develop on its own and is thus no longer dependent on us [JOY 00].

Science fiction is a pioneer in this regard too. Movies such as the Matrix trilogy (1999–2003) and I, Robot (2004) make a topic precisely of the assumption of power by technology that has become autonomous and of man’s loss of control, contributing to the attention paid to this potential technology future. That anxieties over a loss of control have accompanied the history of technological progress does not say anything about a real loss of control being inevitable, nor about its posing a real danger. The future will determine this. But the fact that this concern continually accompanies the use of the results of progress may say something about ourselves, namely, that at the end of a day of creation, we cannot simply find pleasure in what we have done, but feel a discomfort that is tied to the fundamental ambivalence of technology [GRU 09a].

It is this situation that leads to questions being raised in the context of RRI debates with regard to the consequences of the thoughts presented here. The central topic does not consist of the irritating techno-visionary futures but of the constellations of the present and their desired or foreseeable transformation into new constellations in which robots play roles that are today taken by humans, above all as companions. To understand these transformations and the opportunities and risks that they pose, what is needed is a constellation analysis that empirically examines the respective role relationships and the changes they undergo, and that also pursues questions regarding the relationship between humans and technology from the perspective of hermeneutics or the philosophy of technology. At the same time, this analysis must necessarily also examine man’s self-image. Ultimately, the issue of “what ought to be” is at the forefront, i.e. the issue of shaping, instead of blindly following, technological developments. This perspective on issues of responsibility includes classical issues of robot ethics [DEC 12, LIN 12, VER 06], but goes beyond them in both a hermeneutic and an empirical sense.

This results in an expansion of the object of responsibility (Chapter 2) with regard to the role images of our relationship to technology. This must not be understood in a merely instrumental sense but must also be grasped in terms of sociotechnical constellations that concern not only the technical services of robots to and for people but also man’s self-conception. This affects in particular the new distribution of responsibility in view of the increasingly autonomous realm of technology. Robots cannot assume responsibility today or in the foreseeable future, possibly not as a matter of principle; only humans can. We must prevent a world that nonetheless eliminates human responsibility and transfers it to technological systems in an unreflected manner.

Changes in the image of man can be seen, for example, in the reflections it casts on technology. The projection of the longing for a better human onto robots, as is visible in the projections onto a prospective artificial companion, indicates humans’ discomfort with themselves and their mistakes, shortcomings, finiteness, moodiness and limitations. It appears questionable whether this discomfort can be overcome by a perfect artificial companion. Such a presumably perfect product of technology might make humans feel even more disillusioned and depressed when looking in a mirror. RRI debates should also be concerned with these anthropological issues.
