3
Responsibility: A Polysemous Concept

Chapter 2 allowed us to evaluate the different approaches to RRI proposed by the European Commission and by the academic literature. As yet, however, none of these approaches have explored the concept of “responsibility” in detail, nor do they distinguish between legal responsibility, ethical (or moral) responsibility and social responsibility1. Here, we shall focus on the concept of moral responsibility. Indeed, it is this concept that best fits the framework of RRI. As shown in Chapter 1, to be ethical is not simply a question of respecting the law, which is one limitation of ethical reviews. This is even truer for RRI in which, as we shall see, ethics and responsibility go beyond the law.

If we consider the etymological roots of the word “responsibility”, it seems that the initial meaning comes from the Latin term respondere (to respond). In French (as in German with the term Verantworten), répondre (to respond) refers both to the idea of communicating a response and to the idea of being accountable for one’s actions (répondre de ses actes, in French) by taking on responsibility. The philosopher Ricoeur [RIC 95] notes a second meaning for responsibility linked to the idea of imputation, meaning the act of attributing an action or a result to a person. In the case of responsibility as response we focus on the intention of the actor, whereas with the idea of imputation the overriding factor is the causal relationship linking the actor to his or her acts within a particular chain of events [PEL 04]. Therefore, in one case, the individuals are responsible in so far as they must be aware of their actions, justify them and defend them, whereas in the other case, the individual’s responsibility arises from the ability to attribute to them an action of which they are recognized to be the author or authors. These two interpretations open up two different paths of reflection.

However, in order to move towards the creation of a complete map of responsibility, several other meanings, explored in particular in political and moral philosophy (see for example [HAR 68, GOO 86, BOV 98, DUF 07, CAN 02, WIL 08, VIN 09, VIN 11, VAN 11]), must be added to these two interpretations. From these works, to which the recent works concerning RRI must be added [OWE 12, OWE 13b and notably GRI 13], 10 different understandings of responsibility can be identified2. We shall present these briefly in the following, along with examples to highlight the differences between them.

It is possible to identify responsibility as3:

  1. cause4; for example, the tsunami is responsible for the deaths of 10,000 people. It is the cause;
  2. blameworthiness5; for example, Mrs. Y is responsible for betraying her friend. For this reason, she is blamed;
  3. liability; for example, Mr. X is responsible for the car accident and must pay for the damages;
  4. accountability (a and b); for example, the company director is responsible before his or her shareholders, and must justify his or her actions and their consequences;
  5. task (or role)6; for example, the lifeguard is responsible for supervising the pool. This is the task that has been assigned to him;
  6. authority; for example, the chief of the police squad is responsible for operation X. He has the authority to make decisions;
  7. capacity; for example, Mr. Y has the cognitive and moral ability to act in a responsible way;
  8. obligation; for example, the lifeguard has an obligation to look after the people in the swimming pool. He must use the necessary means in order to prevent any accidents from happening;
  9. responsiveness; for example, Mrs. Z has the ability to respond to a problem appropriately, promptly and with precision;
  10. virtue (care); for example, Mr. K has the tendency to act in a responsible way. It is as if he has trained himself to be responsible.

Among these different meanings, we can distinguish between negative interpretations and positive interpretations of responsibility7. The negative interpretations – responsibility as blameworthiness (2), as liability (3) or as accountability (4a)8 – are essentially retrospective. Responsibility is noted afterwards, once the harmful event has occurred.

However, responsibility as accountability (4a), or as an obligation to repair damages (3), also includes prospective elements. This means an individual must consider the future. Indeed, a director who follows the instructions of his shareholders sees his present actions being conditioned by the future obligation to justify his choices and decisions. Likewise, an individual’s current recognition of his or her responsibility as liability determines his or her future actions, for example to repair damage caused or to offer financial compensation.

The negative interpretations of responsibility are particularly focused on the individual. The person or people behind an act, who deserve the blame, are central. Indeed, it is a case of identifying the person or people responsible for the morally or legally reprehensible act, after the act has occurred. This forces the people behind the act to justify their actions and, in certain cases, to attempt to make up for the damage. These types of responsibility are therefore attached to the individual and to the chains or lines of causation that link them to the course of events in question. Above all, these interpretations depend on the idea of imputation, which is contained within the concept of responsibility as mentioned above.

The positive types of responsibility include a prospective element, committing those identified as responsible to ensuring that an action is carried out (or avoided) and that one or several goals are met. In this case, there is a projection toward the future in order to determine morally desirable goals. Such projection determines the possible actions and decisions that will allow the person to work toward these goals in the best way possible. However, the positive interpretations, as we shall see (Chapter 4), do not exclude a retrospective aspect. Responsibility, both as accountability (4b) and as responsiveness (9), assumes that past actions and decisions will be looked at again with a view to either justifying or correcting them. In contrast with the negative interpretations, however, the positive understandings of responsibility help to inspire our current conduct and remain conditioned by a certain normative horizon. From a perspective that is not exclusively consequentialist, the positive understandings influence not only our actions in terms of certain goals that must be met, but also our dispositions and intentions (particularly in the case of responsibility as virtue, see Chapter 4).

3.1. Negative understandings

Before continuing, it is a good idea to go back to the first interpretation of responsibility as cause (1) as this underpins all other conceptions of responsibility. This interpretation corresponds to cases where one states that event A is responsible for consequence X, such as, for example, when we say that the heatwave in Europe in 2003 was responsible for the death of 70,000 people and the destruction of several crops. All negative interpretations of responsibility assume that there is a causal link between an action or event and a certain result, which is judged as harmful. However, these interpretations add to our understanding of responsibility a specific link between the person or people behind an act, and its consequences. Let us note here that, while negative interpretations often refer to results that are judged as harmful, responsibility as cause also involves a more positive interpretation, whereby a person is the cause of praiseworthy results and is hailed as being responsible for them.

The negative understandings of responsibility correspond to responsibility as blameworthiness (2), as liability (3) and as accountability (4a)9. We present them in brief in the following in order to offer a critical analysis of them in section 3.2.

3.1.1. Responsibility as blameworthiness (2)

We shall begin with responsibility as blameworthiness, which is the most commonly accepted and used interpretation. We hold agent A legally and/or morally responsible for action X (A is responsible for the burglary committed the previous night, in the case of criminal culpability). This understanding of responsibility assumes five conditions as follows [THO 80, BOV 98, COR 01, SWI 06, VAN 11]:

  1. moral agency: the agent has the mental capacity to act in a responsible way. In the case of cognitive deficiency, the perpetrator of a morally reprehensible act could not be held responsible (culpable) for their actions;
  2. causation: Agent A is, in one way or another, causally implicated in action X;
  3. wrongdoing10: Agent A has committed a wrongdoing;
  4. freedom: Agent A was not forced to commit act X (if he or she had been, culpability could fall on the person who forced A to commit X);
  5. awareness: Agent A knew, or could have known, that committing X would lead to undesirable consequences.

A few comments about these conditions. First of all, we can recall that this type of responsibility as blame has consequences for the person held responsible: legal repercussions in the case of culpability (criminal law) – which can be enforced by various agents from the police to legal and prison staff – and opprobrium, exclusion, moral condemnation or interior torment in the case of moral responsibility (of the kind encountered by Raskolnikov in Fyodor Dostoyevsky’s Crime and Punishment). In these cases, responsibility implies a damaging event and translates to blame (which comes from an institution, society in general or one’s moral awareness). As we have seen, this is not the case for all types of responsibility.

Furthermore, the conditions necessary for identifying responsibility as blameworthiness (or as culpability, when the wrongdoing falls under criminal law) define the outlines of the awareness and freedom with which an individual acts. In order to be recognized as morally or legally culpable, we must have been free to act, both in terms of movement and awareness [GIA 16]. However, recognizing and appreciating an individual’s freedom can sometimes be a difficult task, beyond psychological issues alone. Let us consider, for example, a case where the perpetrator of an act who deserves the blame acts under physical constraint (imposed by an aggressor, for example) or under orders from a hierarchy (as is often the case in the army). In these two cases, the freedom to act is reduced by the will of another. Thus, determining the reality and the limits of one’s responsibility is not easy. In the same way, the causal links that connect a perpetrator’s action with certain morally reprehensible consequences are not always easily identifiable (see section 3.2.1). In such cases, it can be arduous to determine who is responsible and to identify culpability and the subjects of blame. Finally, the condition of awareness is also subject to debate: in the particular context of RRI, it is necessary to clarify the level of awareness required in order to be able to affirm responsibility, while taking into account, unlike in other cases, the ontological uncertainty that characterizes innovation and research. We will come back to this question in more detail in section 3.2.1.

3.1.2. Responsibility as liability (3)

This type of responsibility presupposes responsibility as blameworthiness. It is mainly determined by the law, but does not exclude the moral obligation to provide reparations. In this type of responsibility, the legal or moral culpability of agent A when held responsible for action X forces agent A to repair any material, bodily or psychological damage suffered by the victim(s) or at least to compensate for it. For example, A will have an obligation to offer financial compensation or some form of symbolic compensation (by expressing apologies, for example). In certain cases, the obligation to offer reparations goes along with legal sanctions. This perspective on responsibility, as we have said, uses present and future actions to compensate, when possible, the victim or victims for the harm they have suffered. However, this is not a positive understanding of responsibility in that it does not aim in any way to prevent certain outcomes seen as undesirable. Since this form of responsibility is one of the most frequently called upon (most notably by the law), we shall not study it any further11.

3.1.3. Responsibility as accountability: the passive form (4a)

This last negative interpretation of responsibility refers to the idea that an agent should be held accountable to a principal. Bovens [BOV 98] distinguishes between two accepted meanings for the term accountability: one positive, being virtue (which we shall analyze in the following), and the other negative, as a mechanism. We shall now focus on the latter. According to this interpretation, which is historically derived from the notion of a balance sheet, the notion of accountability corresponds to a “relationship with a specific social mechanism that involves an obligation to explain and justify one’s behavior” [BOV 98, p. 30]. In such an interpretation of responsibility, emphasis is placed on the political and social control that allows appropriate practices in governance to be carried out. This understanding involves the use of investigative and verification practices in order to determine whether the agent’s behavior was appropriate. It establishes a specific relationship between the person who must be held accountable, the agent (company director, minister, delegate, etc.) and the principal that has delegated his, her or its power to that agent (shareholders, parliament, professional associations, etc.). The agent is therefore obligated to justify their actions in front of a forum, which questions them on the legitimacy of their behavior, makes a judgement and sometimes even imposes repercussions in case of misconduct. Bovens pays particular attention to the idea of mechanism, as this interpretation of the obligation to be held accountable is purely instrumental. The agents are commanded to conduct themselves well by an authority, under the threat of possible blame or repercussions. They are contractually mandated and have a legal obligation to justify their actions a posteriori. Of course, this obligation conditions their actions in the present and pushes them to act in the interest of the principal. However, such an obligation relies on the threat of sanctions; the good will and moral capacity of the participants in an action do not come into play. There is of course a prospective dimension to this understanding of responsibility, but it does not consider things in a positive light. Once linked with a mandate, it is possible for individuals to act in the name of the common good, but not necessarily by fully adhering to a normative horizon whose standards they have “internalized” and interpreted. In Chapter 4, we shall see that, in contrast, the second accepted meaning of accountability (4b) tends to link the organizational aspects of responsibility with the moral motives of individuals, while leaving space for a type of responsibility understood as individual virtue.

3.2. Responsibility: between excessive pressure and dilution

We must recognize that these negative understandings of responsibility are often a first step in understanding this concept. The idea of responsibility as moral blame and culpability, or even as liability, can in effect allow the social order to be maintained. These constraints influence the actions of individuals and dictate their behaviour so that they contribute, in an appropriate way, to a collectively determined common good (at least in democratic societies). The term “negative” could lead us to think that these particular understandings of responsibility operate much like brakes. Just as researchers12 sometimes express reticence towards ethical assessments, forcing individuals – in particular those participating in innovation and research – to face their responsibilities can be seen as an extra barrier to their creativity and freedom. Yet contemporary societies are built on the idea that creativity and freedom are the conditions of innovation, research and, ultimately, the progress that leads to economic and social growth as well as relative political stability. On this view, forcing participants in innovation and research to face their responsibilities would act as an obstacle to scientific and technological development.

However, the negative understandings should not be seen only as hindrances. Indeed, to anticipate possible disapproval or repercussions, that is to say to internalize certain norms and prohibitions that exist at moment t and which determine social practices, allows the aims of research and innovation to be redefined so as to align them more clearly with the social norms (and laws) in place at moment t. Since one must take into account the set of norms in place that dictate the boundaries and uses of science and technology, anticipating these rules can end up saving precious time. Notably – following a purely instrumental logic – since social hostility and lack of understanding can be detrimental to complex technological projects, it is necessary at the very least to respect the legal norms in place. Admittedly, the legal framework can be modified by the emergence of new technologies or by changes to the norms that underpin assessment, since debate and ethical or bioethical controversies come about before legal stabilization, which takes longer. The fact remains that, in many cases, the threat of repercussions is an effective way to shape the behaviour of individuals (in this case, participants in innovation and research). If, as we do below, we are to defend the idea that one must also nurture a tendency to act in a responsible way, then instrumental rationality, which takes into account established social constraints, is already an early step towards responsibility.

In the specific context of innovation and research, however, negative understandings based only upon this type of rationality are not enough, and they lead to three different types of problem: (a) they do not imply a moral commitment from people; (b) they often lead to a dilution of responsibility; (c) they sometimes even lead to the disappearance of the agent(s).

3.2.1. The lack of normative commitment

Responsibility as blameworthiness or as liability assumes that an agent will be accused of wrongdoing a posteriori, taking into account his or her actions and decisions. The standards and laws that determine this type of responsibility can therefore dictate the actions of people through the sanctions they will impose in case of wrongdoing. However, they involve a significant retrospective aspect, which is inadequate in the case of innovation and research, where one must attempt to anticipate, however imperfectly, an uncertain future. When faced with modern challenges, it is not sufficient to rely solely on a type of responsibility that is decided retrospectively, once the harm has been done. Emerging technologies, for example information and communication technologies (ICT), bio- and nano-technologies, or even geoengineering, require a cautious approach in order to prevent the serious or irreversible harm they have the potential to cause. This involves a prospective form of commitment (see Chapter 4). Furthermore, in a certain way these understandings of responsibility, which are founded on fear of repercussions, do not engage the person morally. Indeed, the person chooses to act in a “morally appropriate” way in order to avoid condemnation, social judgment or financial or criminal repercussions. However, from a normative viewpoint, to act according to the law or the norms in place in a society is not necessarily to act in an ethical way. In other words, respecting the legal order does not in itself amount to acting in a morally appropriate way. The distinction between respect for laws and norms on the one hand, and moral action on the other, has a long tradition in moral philosophy and there are many examples that illustrate it. In particular, we can cite the laws promulgated to discriminate against Jews under the Vichy regime in France. These were legal since they emanated from a legitimate legislative system. However, they are recognized as morally questionable by most normative systems. Conversely, there are acts of civil disobedience that are illegal but which can be morally justified, such as illegal abortions carried out before the Veil law of 1975 decriminalizing abortion in France.

If we wish to promote RRI, it is necessary to think of a theoretical framework that could set out and enforce a normative level of commitment for participants in innovation and research, one that does not merely promote morally appropriate behaviour through the threat of repercussions. Instead, we must encourage a process of training and adaptation that allows participants to interpret existing norms, and possibly to create new ones, in order to best fit the developments in their field. Further on, we shall see how the positive understandings of responsibility endeavor to move in this direction.

An interpretation of responsibility founded solely on respect for the law (or pre-existing norms) and avoidance of sanction leads to a final adverse effect, which relates to the level of knowledge required. The social actors, businesses or institutions behind research, innovation and the development of new sciences and technologies can be held responsible for certain damages that occur following their research. From a legal standpoint, however, their criminal responsibility can only be considered whilst taking into account the existing knowledge relative to the science or technology in question. Nobody can be expected to do the impossible. The level of responsibility and the compensation offered for damages that could not reasonably have been anticipated will not be the same as those required in cases of negligence, that is to say either a voluntary or involuntary lack of knowledge.

In other words, purely retrospective responsibility implies a particular level of knowledge that participants are assumed to possess in order to anticipate future harm. This can lead to deviant behaviour on the part of these participants who, in order to reduce future responsibility, will attempt to ensure that the level of awareness and knowledge required is as low as possible. The greater or more complex our awareness of the possible difficulties that a technology or scientific investigation could lead to, the more likely it is that the participants in innovation and research will be held responsible for future damages. They will therefore have an incentive to reduce the amount of knowledge that it is necessary to have, to the detriment of anticipating future problems.

A very recent example is the controversy caused by the work of a team directed by Gilles-Eric Séralini, which investigated the possible toxicological risk related to the consumption of genetically modified organisms (GMOs) treated or not treated with herbicides. It was an article [SER 12] initially published in September 2012 by leading toxicology journal Food and Chemical Toxicology which sparked this controversy. This article covers a two-year study and puts forward the following conclusions: Monsanto’s Roundup-resistant NK603 GM maize and Roundup itself both cause long-term toxic effects at certain doses and in a particular type of rat. However, the study also shows that rats fed with lower doses of NK603 GM maize developed tumors less frequently than those that were not fed with NK603 maize. The results differ depending on whether male or female rats were used. As soon as it was published, the article caused intense controversy and was the subject of much scientific criticism. The journal Food and Chemical Toxicology states that it received several letters demanding its retraction, even though the article had passed the peer review process necessary for anything published in the journal. The article’s defenders (anti-GMO NGOs, for example) have been quick to maintain that a portion of these letters came from Monsanto, the manufacturer called into question by the article. In the end the article was retracted in November 2013, on the grounds that the team’s results were not “conclusive”, although this reason is not included among those listed by the journal as grounds for retraction.

From October 2012, the French Agency for Food, Environmental and Occupational Health and Safety (Anses), the Haut Conseil des Biotechnologies (HCB) and the European Food Safety Authority (EFSA) rejected the conclusions of the Séralini study. There seem to be several reasons for the criticism: the choice of the type of rat (known to spontaneously develop tumors as it ages), the number of rats used (judged insufficient to provide statistical validity), the incomplete and imprecise presentation of results (leaving out certain contradictory results), and Séralini’s spectacular method of communication, which showed images of rats deformed by tumors.

However, there was no lack of supportive reactions to the Séralini study, including some that were themselves critical of it. These focused mainly on the long duration of the study (two years), compared with previous short-term studies (90 days). On November 14 2012, the French newspaper Le Monde published an open letter signed by 140 researchers pointing out that, whilst Séralini’s procedure contains faults, it is very comparable to those used by experts to back up decisions to authorize GMOs, and asking: why deviate from this protocol only when the results contradict commercial logic? In 2014, two years after its initial publication, the article by G.E. Séralini’s team was republished with a few modifications in Environmental Sciences Europe.

As highlighted by the sociologist David Demortain [DEM 13], this controversy is enlightening as it questions the way in which toxicological standards are gauged. The Séralini affair at least had the benefit of questioning current norms (for releasing GMOs onto the market) and the lack of scientific debate on the subject. In reaction to it, Anses and the HCB emphasized the need to carry out long toxicological experiments (beyond the regulatory 90 days) on animals fed diets containing Monsanto’s transgenic, herbicide-tolerant maize. Furthermore, certain unfounded criticisms directed at this study (which has been called dishonest, for example), along with the poorly disguised activism of manufacturers to discredit it, are testament to the pressures placed on studies into the toxicity of GMOs, pressures explained by the significant commercial losses at stake but which hinder the quest for knowledge. Yet the question of the level of relevant knowledge concerning the health risks of GMOs must be addressed in a collective and transparent way. If it is left solely to experts and manufacturers, the resulting standards may not inspire an optimal level of effort. A purely retrospective type of responsibility, linked with blame or the necessity to make up for a wrongdoing, does not allow the problem to be grasped until afterwards, once the damage has been done (for example, if the toxicity of these organisms were conclusively recorded). Here again, only a prospective and positive type of responsibility allows us to bypass this type of reasoning, which is counterproductive in the long term.

3.2.2. Dilution of responsibility

The second type of problem that these negative interpretations of responsibility give rise to stems from their individualist dimension. We have seen that they are based on attributing the responsibility for action X to individual A. This attribution can of course involve a group of individuals. However, it is fundamentally a case of attributing an action – or a series of actions – to an identifiable entity (an individual or group of individuals). This can lead to two types of harmful consequences for responsibility.

The first has been identified by the political theorist Dennis Thompson as the problem of “many hands” [THO 80]. This describes the case where an action is carried out by several agents, each having different and partial responsibility, as is common in the fields of science and technology. In such cases, it is often difficult to attribute responsibility precisely and by name, and to untangle the complex chains of causation that led to undesirable event X13. It is difficult to say precisely who is responsible for what. One can speak of the dilution of responsibility in the complexity of events. In short, nobody is responsible for anything anymore. This is even truer when future actions are influenced by uncertainty.

As written by Ricoeur:

“[…] taking all the consequences into account, including those most contrary to the original intention, results in holding the human agent responsible for all, indiscriminately, which is to say responsible for anything he can take charge of.” [RIC 95, p. 66]

Purely consequentialist approaches to responsibility should take into account the fact that our actions have “adjacent effects” – to use one of Ricoeur’s terms – and these produce unexpected consequences that are sometimes completely the opposite of the initial intention. At any rate, the multiplicity of participants can also contribute to the dilution of responsibility. As formulated by Hannah Arendt:

“Where all are guilty, nobody is.14” [ARE 03, p. 173]

Here we find ourselves caught between a rock and a hard place [SWI 06]. On the one hand, too broad an understanding of responsibility: (a) can constitute a hindrance to the creativity of the participants in innovation and research, when the threat of repercussions serves to limit their actions; or (b) paradoxically results in a dilution of responsibilities due to the complexity of the causal chains and the multiplicity of participants.

On the other hand, the desire to disengage the participants from their responsibility risks reducing that responsibility to an empty shell. This therefore raises the question of to what extent individuals can be held responsible for their actions.

To promote responsibility in innovation and research entails relying neither on an infinite kind of responsibility, where each person can or must be responsible for everything, nor on a responsibility that is purely restricted to the actions of an individual. It is therefore a case of finding an appropriate horizon of causation, meaning the spatial and temporal limits within which it is legitimate to expect the responsibility of participants to be exerted. This would allow responsibility to be attributed to one or several participants. The conclusion of this chapter will offer a partial solution to this problem.

3.2.3. Understandings of responsibility with no agent

Thirdly, our modern understanding of responsibility has been modified considerably by the idea of solidarity against risk, which emerged during the second half of the 20th century and which translated into the birth of the welfare state in Europe and North America [RIC 95, EWA 86]. Responsibility understood as an obligation to repair damages (which assumes that an error can be attributed to an agent) thus transforms into a faceless type of responsibility, with no agent. In this interpretation, members of society have to be protected against risks that have no perpetrator (in the case of a natural disaster, for example). With this passive form of responsibility, which focuses on the “vulnerability” of members of society, the imputation of blame is left in the shadows, and even disappears, since the risk and damages cannot be attributed to anyone.

A recent case illustrates this point. On October 4 2015, the Alpes-Maritimes region of France experienced very heavy rainfall. This caused several waterways to rise (notably the Brague river), submerging the streets of several towns (Cannes, Antibes, Mandelieu-la-Napoule, Villeneuve-Loubet and Nice). More than 20 deaths, over 30,000 people affected and almost a billion euros of material damage were recorded15. At first glance, the bad weather that caused the floods was not attributable to humans without alluding to climate change (but the causal link between this type of bad weather and greenhouse gases is still difficult to establish16). Neither can we blame a lack of forecasting: the area had already been classed as an orange risk zone by Météo-France (the French national meteorological service), and it is not yet possible to determine the intensity and precise location of rainfall in advance. It therefore seems difficult to attribute the deaths of the victims to any individual, group of individuals or institution. The State must take on a form of responsibility, which requires it to compensate the victims (particularly in the case of material damages) without taking on the blame for the event.

However, the limits of such interpretations of responsibility are clear. It is rare to find cases where it is completely impossible to establish cause and allocate responsibility in a natural disaster or in scientific and technological development. Responsibilities can be numerous and difficult to trace. They are rarely without a perpetrator, and this is even truer in the case of innovation and research.

If we go back to our previous example, the very scale of the catastrophe has human causes: construction in flood-prone areas, and land take due to urbanization, which prevents water absorption and increases runoff17. These factors are also frequently cited as explanations for the number of victims. It should be added that, in this case, it is possible to identify some responsible parties: public powers working with construction businesses, businesses more concerned with profit than with land-use planning, and even individuals who buy property in areas at risk of flooding at a low cost.

We can see that even in the case of natural disasters, human decisions can have direct consequences on the overall impact of an event which is difficult to predict (see also the case of the Aquila earthquake, analyzed below in section 3.3.3.).

The same types of questions previously mentioned are again raised: to what extent can public institutions prevent risks (and this in a context where States are heavily indebted and therefore tend to reduce social services)? If future generations are taken into account (as advocated by sustainable development), what is the reasonable limit of responsibility that States must take into account in order to determine their actions (whether preventative or corrective)?

These questions, which are also considered in theories of environmental justice [BLA 09], cannot be answered in general or a priori when considered without any context. They are mentioned here, not so much to provide answers as to demonstrate how the negative understandings of responsibility pose significant challenges to human rationality. These difficulties in confirming ex post facto who is responsible for what highlight the need to find a more streamlined way of linking actions to moral commitment. Only when we see that every human activity generates some form of responsibility – a form that can be defined using the various interpretations of the term – can we hope to get around the inescapably tedious task of separating out the tangle of responsibilities (who has done what, under what authority, etc.) following a catastrophe whose causes may have been “natural” or human. Of course, such an exercise is extremely useful. However, it alone cannot represent all the complexity of moral responsibility.

3.3. The example of scientists’ responsibility

In order to pursue this line of thought, let us leave behind theoretical reflection for a moment and focus on two historical examples that intensely question the limits of the scientist’s responsibility. This step, which continues the analysis of the ethics and responsibility of research participants begun in Chapter 1, may not seem as relevant when it comes to issues specific to innovation. However, it allows us to revisit three important points that could apply in a more general way to all participants in research and innovation.

First of all, these examples show the way in which individual responsibility is exerted in case of a problem, and how it entails a more collective dimension of responsibility. Secondly, they clearly illustrate the limits of the retrospective understandings of responsibility. Finally, they allow us to move towards forming a type of understanding where the moral engagement of participants is essential.

In our first example, we shall explore the extent to which the scientists Albert Einstein and Robert Oppenheimer can be held responsible for the bombings of Hiroshima and Nagasaki on August 6 and 9, 1945, and their terrible outcomes. Next, we shall cover certain aspects of the controversy that surrounded the scientific management (assessment and communication of risks) of the 2009 Aquila earthquake in Italy.

3.3.1. The atom bomb: responsibility as blame and management ex post facto

Let us briefly revisit the respective involvement of Einstein and Oppenheimer in the development of the A-bomb and what we know about the way they recognized their own responsibility. We know that on August 2, 1939, Albert Einstein signed a letter that was written by the physicists Leo Szilard and Eugene Wigner and sent to President Roosevelt. This letter18 attempts to alert the President of the United States to the possibility that Nazi Germany was developing a nuclear weapon. It defends the concomitant necessity of conducting urgent research on nuclear energy in the United States. The letter contributed most notably to the launch of the Manhattan Project, which culminated in the creation of the atomic bomb. We also know that Einstein was kept out of the Manhattan Project because of his pacifist views. After the war, and until his death in 1955, he was an ardent defender of pacifism and of the fight against the arms race. Shortly before his death, he cosigned, with the philosopher Bertrand Russell among others, what would become the Russell-Einstein Manifesto, denouncing the dangers of nuclear weapons and demanding that political leaders move towards pacifistic solutions. With Leo Szilard, he also founded the Emergency Committee of Atomic Scientists (ECAS), whose missions included educating and informing the public about the dangers of this new type of weapon.

These few historical elements allow us to define the shape of a committed scientist who conceives of, and follows the progress of, technologies that he helped to develop (even indirectly, in Einstein’s case). In this example, the scientist’s commitment becomes political in order to limit, or at least organize in advance, the uses of a technology that is dangerous but which also carries huge positive potential. However, the responsibility that Albert Einstein seems to accept regarding the atomic bomb does not end there. Indeed, as reported by his friend and colleague Linus Pauling, he came to express explicit regret concerning his involvement in the Manhattan Project. In his journal, Pauling transcribes a few extracts from a conversation he had with Einstein on November 16, 1954, during which Einstein confided to him:

“I made one great mistake in my life – when I signed that letter to President Roosevelt recommending that atom bombs be made; but there was some justification – the danger that the Germans would make them19”.

With these words, Einstein testifies to a feeling of guilt and recognizes his individual responsibility for a precise act, a noteworthy political (and not scientific) act, which indirectly contributed to the bombings of Japan in August 1945.

The case of Robert Oppenheimer proves to be more complex than that of Einstein20. Indeed, as scientific director of the Manhattan Project from 1943 to August 194621, he was, in contrast to Einstein, an essential architect of the development of the nuclear weapon. At the heart of the Manhattan Project, which would produce Little Boy and Fat Man22, he defended the military use of nuclear weapons against Japan and gave his agreement on the choice of targets. He also recommended that the Japanese should not be warned of the imminent bombing, as confirmed by a report released by the Interim Committee created by President Truman on May 2 1945 with the purpose of discussing the use of the bomb and its political implications. On June 1 1945, this committee, of which Oppenheimer was a member as scientific adviser, concluded that:

“The bomb should be used against Japan as soon as possible; it should be dropped on a war industries plant surrounded by worker housing; and it should be dropped without prior warning”. [RHO 86, p. 650-651, cited by RIV 02, p. 216]

However, a few years later (in 1962) Oppenheimer would come to regret this absence of warning, which could have reduced the number of civilian deaths.

“… Nevertheless, my own feeling is that if the bombs were to be used there could have been more effective warning and much less wanton killing … than took place actually in the heat of battle and the confusion of the campaign.”

Furthermore, Oppenheimer was a scientist who was conscious of and even worried about the moral implications of his actions. In a speech given to the American Philosophical Society on October 16 1945, he said:

“We have made a thing, a most terrible weapon, that has altered abruptly and profoundly the nature of the world … a thing that by all standards of the world we grew up in, is an evil thing. By so doing (…) we have raised again the question of whether science is good for man”. [BIR 06, p. 323]

Oppenheimer seems to express contradictory feelings towards the enormous power of what he helped to develop. For example, Bird and Sherwin [BIR 06] describe him as being particularly tormented, morose and anxious at the end of the war, particularly after the first test, carried out in New Mexico, was labelled a success and before the bombs were dropped on Japan. This torment continued after the war ended. In 1948, in an article published by Time magazine, Oppenheimer writes:

“The experience of the war has left us with a legacy of concern… nowhere is this troubled sense of responsibility more acute… than in those who participated in the development of atomic energy for military purposes. … Physics, which played a decisive role in the development of the nuclear bomb, is a direct product of our laboratories and our journals… In some sort of crude sense, which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge that they cannot lose”. [RIV 02, p. 305]

The scientist, here, evaluates his responsibility for these macabre events and expresses a form of culpability when faced with the “evil” that has been created because of him. After the war, this feeling of responsibility would push him to actively contribute to the destiny of his “creation”. He came to chair the General Advisory Committee (GAC) of the Atomic Energy Commission (AEC) from 1947 to 1952, and to greatly influence American politics in nuclear matters. He would work to set up a national and international legal framework to regulate the creation and use of nuclear weapons and to try to contain the arms race.

The examples of Einstein’s political stance and of Oppenheimer’s attitude towards his own research bring to light a form of responsibility that limits the excesses of the two extremes outlined above. Einstein was directly committed to influencing the political attitude towards nuclear matters, even though he did not directly take part in the construction of nuclear weapons. Oppenheimer, as we have seen, actively contributed to the creation of these weapons, but deplored some of their consequences. Fully conscious of the destructive power of what he had helped to develop, he seemed to demonstrate a form of responsibility tainted with inescapably retrospective guilt. Nevertheless, whilst he had misgivings during his years at Los Alamos, where the scientific epicenter of the Manhattan Project was located, in no way did this stop him from pursuing his research.

All the ambiguity of this character is clear. On the one hand, Oppenheimer legitimized the use of the bomb – never did he seem to oppose it or even be sorry for it. He argued that an intervention on the ground, which was seen as a possible alternative to the use of the bomb, would come at immense human cost [RIV 02, p. 226]. On the other hand, he was alarmed ex post facto by the profound geopolitical changes and social consequences brought about by this new power.

Another example demonstrates this ambivalence. In contrast to the enthusiasm of the scientist Edward Teller and political pressure from Lewis Strauss (a member of the AEC), Oppenheimer, during and just after the war, came to firmly oppose research on the thermonuclear bomb. He feared that its destructive potential would be much greater than that of the atom bomb. In 1951, however, he had a radical change of opinion, and came to defend its creation after studies showed that it was technically possible. Michel Rival [RIV 02, p. 265] wrote:

“This sudden turnaround is shocking to say the least, and in it we should probably see this particular form of schizophrenia that affects some scientists when a new discovery fascinates them and paralyses their judgement. “Technically sublime” considerations thus outweigh moral considerations, even in men as sensitive and intelligent as Oppenheimer.”

As director of the GAC, Oppenheimer therefore became personally committed to slowing down the progress of research whose objective seemed to him to be morally reprehensible. He presents the image of a scientist who feels personally responsible for the possible consequences of his actions, a sense of responsibility that nonetheless vanishes in the face of scientific curiosity and excitement.

It is not our intention to pass moral judgement on Robert Oppenheimer’s attitude or the attitudes of others at this moment in history, nor to settle the question of whether these bombings were necessary and/or justified, considering the extremely tense and dramatic geopolitical context of the time, or whether it was justified to support development programmes for a thermonuclear weapon. Rather, we must pay attention to the limits of retrospective responsibility, which only truly takes on significance in light of a particularly serious consequence: the almost total destruction of two Japanese cities.

Let us add that it is perhaps this particular ex post facto form of responsibility that, after the war, pushed Oppenheimer to become a political and scientific figure whose dynamism greatly fuelled the debate on nuclear energy, contributing in particular to making the international community consider the geopolitical changes that such a discovery could bring about. In doing this, he took on a prospective and positive form of responsibility, by assuming the image of a committed scientist anxious to consider the possible uses of a technology that he helped to develop.

In the cases of Einstein and Oppenheimer, we are far from the figure of the neutral scientist who leaves the consideration of the social uses of science and technology up to others. Both scientists seem to have fully comprehended the scope of their responsibility and the scale of certain dramatic consequences of their discovery. They attempted to remedy this and, more generally, to influence the future uses of these discoveries through activism and political engagement. Finally, despite the mistakes of these brilliant scientists, these examples illustrate the necessity of individual engagement. As we shall see in Chapters 4 and 5, this should be accompanied by reflection on the governance of institutions. We are not saying here that scientists and participants in innovation should be virtuous in all their actions. Whilst this may seem a commendable goal, it is probably unattainable. It may, however, act as a normative horizon, both precise and out of reach, allowing us to advance towards the full and explicit recognition of an ontological responsibility that gives rise to action.

3.3.2. Responsibility: the individual and the collective

In order to explore in depth the lessons learnt from this case, let us return to the line of thought we began in section 3.2, on the limits of responsibility, between dilution and hyperbole. We have seen that the degree of responsibility that can be placed on participants in innovation and research can pose a problem within the context of complex networks of responsibilities.

Let us take for example the existentialism of Jean-Paul Sartre [RIC 99]. In Being and Nothingness [SAR 43], man is always totally responsible in relation to the world he lives in; his responsibility stems from the ontological freedom he enjoys (or suffers) when choosing his destiny according to the values that he himself has determined (since God can no longer be a source of reason). Consequently, this hyperbolic perspective on responsibility23 is not the simplest gateway to RRI in that it favors the dilution of responsibility that we highlighted previously.

An interesting way to define the problem comes from an interpretation of collective responsibility, suggested by political thinker Hannah Arendt, for understanding the “banality of evil”. Indeed, this interpretation allows responsibility to be attributed to individuals for acts that they have not necessarily committed, and does so in a non-hyperbolic way.

Arendt, to outline the shape of collective responsibility, distinguishes first of all between responsibility and guilt. We can be held responsible for acts we did not commit. We cannot, however, be labelled guilty [ARE 03, p. 173], in the sense of being blamed or condemned, for acts we did not commit. Guilt therefore often remains individualistic, attached to the perpetrator or perpetrators of a wrongdoing.

However, once we have dismissed the possibility of judging an individual as guilty of acts he or she did not commit, there remains the idea of collective responsibility. Arendt describes two conditions for establishing collective responsibility.

“I must be held responsible for something I have not done, and the reason for my responsibility must be my membership in a group (a collective) which no voluntary act of mine can dissolve, that is, a membership which is utterly unlike a business partnership which I can dissolve at will.” [ARE 03]

From this perspective, and to return to one of the examples given above, the scientific and engineering community is responsible: it is these groups that made the atom bomb possible. This whole community – not only those who physically took part in the Manhattan Project and the research in Germany and Russia – can be held responsible.

For those who may nonetheless be afraid that this notion of collective responsibility will weigh too heavily upon the shoulders of scientists, engineers and innovators, it should be pointed out that it does of course require the involvement of the social actors of innovation and research. However, it does not imply that these people will be judged as guilty for possible damages (unexpected damages, for example). With this interpretation of collective responsibility, we attempt to capture the (normative) commitment of a professional when it comes to decisions that are accepted or dismissed by the community to which they belong (doctors, aeronautical engineers, biochemists, etc.). Einstein’s pacifistic engagement and his efforts to prevent the arms race, despite his having no direct involvement in the creation of a nuclear fission bomb, demonstrate this particular form of responsibility. This would thus serve to organize the cluster of responsibilities mentioned in Chapter 1, as determined by ethical reviews and by professional codes of ethics.

Finally, we add that this understanding of responsibility does not necessarily offer a precise normative answer to the question of whether, for example, the research of 1951 into the hydrogen bomb should have been pursued. Nonetheless, this is a central element to the possibility of productive debate, which, as we have seen, is an essential condition of RRI.

3.3.3. The Aquila earthquake and the responsibility of scientists in helping to reach a decision

Our second example allows us to further refine the limits of scientists’ responsibility. The setting for this example is the Aquila earthquake, which led to the conviction of six scientists (who were ultimately acquitted on appeal), and to the mobilization of the international community around the question of scientists’ responsibility. This case is interesting since the responsibilities for which the scientists were charged – often miscommunicated by the press – are not those that might be expected. They stem not so much from a failure of prediction (negative responsibility) as from a concern for the way in which the message of a scientist acting as an expert is delivered, transmitted and used in public and individual decision making. This example brings to light the necessity of adopting positive understandings of responsibility, which focus above all on our ability to think ahead in order to control, regulate and follow up on the outcomes of the theories and technologies that we formulate or help to develop.

Let us recall the facts. On April 6 2009, an earthquake of magnitude 6.3 on the Richter scale hit the Abruzzo region of Italy, leaving over 300 dead, 1,500 injured and 65,000 homeless, according to the ICEF’s final report24 published in 2013. There had been strong seismic activity for several months prior to the event. Low-intensity, repeated tremors had begun in February and caused growing anxiety among the residents of the Aquila province.

Guido Bertolaso, the head of Italian civil protection, called a one-off meeting of the National Commission for the Forecast and Prevention of Major Risks. On March 31 2009, 6 days before the earthquake, this commission, also known as the Major Risks Commission, met in Aquila. Its participants included reputed seismologists, volcanologists and physicists: Enzo Boschi, who was then president of the Italian National Institute of Geophysics and Volcanology (INGV) in Rome; Franco Barberi, volcanologist at the University of Rome III; Mauro Dolce, head of the Seismic Risk Bureau at the National Department for Civil Protection in Rome; Claudio Eva, from the University of Genoa; Giulio Selvaggi, director of the INGV’s National Earthquake Center; and Gian Michele Calvi, President of the European Centre for Training and Research in Earthquake Engineering in Pavia. Also present was Bernardo De Bernardinis, then deputy director of the Department for Civil Protection and an engineer in fluid mechanics25.

Among the elements discussed during the trial, before any scientific discussion, was the fact that Bernardo De Bernardinis, an hour before the Major Risks Commission met, had given an interview whose aim was to reassure the population. In this interview, he declared that the seismic situation in Aquila was definitely “normal” and did not present any “danger”26. He added to these reassuring remarks during the press conference that followed the Commission’s meeting, which was not attended by the scientists involved. De Bernardinis then stated before the cameras: “the more the earth shakes, the more it releases energy. It should subside. I would say: ‘Go home and have a nice glass of Montepulciano’”27. However, in the official report of the meeting, the scientists in attendance concluded that it was impossible to predict the occurrence of a large-scale earthquake with any certainty, but also emphasized that it was impossible to completely rule out such an event. This official report (published after the earthquake) also recommended that anti-earthquake measures should be taken more seriously, particularly in the construction of housing.28

This desire simply to provide reassurance rather than clear information about the difficulties of prediction is what led the deputy prosecutor, Fabio Picuti, to press charges. Indeed, he classed the information issued following the Major Risks Commission as “incomplete, imprecise and contradictory when it comes to the dangers of the seismic activity in question” [HAL 11].

Among those bringing civil action for damages, some people who were close to victims spoke in particular of the fact that the reassuring remarks of this commission had influenced their initial intention of evacuating their homes, to the point of completely dissuading them from doing so29.

At the end of the first trial, on October 22 2012, the seven members of the commission were found guilty of involuntary manslaughter (“negligent homicide”) and sentenced by the judge Marco Billi to 6 years of imprisonment and 9.8 million euros of damages and interest, to be paid jointly and severally to the civil parties. On appeal, only De Bernardinis would end up being sentenced to 2 years in prison for “involuntary homicide and negligence” towards certain victims, whilst being acquitted for the deaths of others.30

To enter into the twists and turns of the negotiations that might have led to such a turnaround in the judgement on the responsibility of the scientists would require a deeper investigation beyond the scope of this book. But the case remains very enlightening in spite of this. Indeed, after the first conviction, the international scientific community reacted strongly and was troubled by the judgement. For example, the American Association for the Advancement of Science (AAAS), in a letter to the President of Italy, recalled that it was worrying that the threat of legal sanction now weighed on scientists, a threat that “discourages them and prevents the free exchange of ideas essential to scientific progress.” Another letter, signed by more than 4,000 scientists, was also sent to the Italian President.

Many, starting with the lawyer of the accused, also defended the idea that the scientists could not be held responsible for not having predicted the earthquake. The knowledge available at the time did not make it possible to predict that a quake of such magnitude would occur. How, then, could they be held responsible for the damage caused by the earthquake31?

This is not, in fact, the right question, since the accusations at the trial made no mention of the fact that the scientists had failed to predict the event more accurately. What engages the responsibility of the commission members is their failure to communicate their results clearly, leaving it up to the authorities to decide whether to offer reassuring or nuanced information when the situation was anything but certain.

We can easily imagine that, for the government, there is a certain risk in dramatizing a situation and recommending in vain that a region be evacuated, a step which carries risks other than financial cost. Nonetheless, this in no way diminishes the responsibility of scientists who act as experts. When their statements and their research are supposed to aid public decision making, these experts must answer to a new type of responsibility: that of communicating the results in a way that is clear, understandable and complete (in this case, including the uncertainty involved). Although translating science into decision making is not easy, such communication could have served as a useful counterweight to the Department of Civil Protection’s obvious desire to play down the risks. Though the responsibilities are shared (construction of fragile buildings in a seismic zone, reluctance of the public powers to evacuate the town), there cannot always be a strict division of labor according to which the expert submits a report and the decision makers make decisions.

This example, which could possibly set a precedent, has therefore shown how necessary it is for the expert’s opinion to be delivered in the most clear and transparent way possible when it comes to making complex decisions [GRA 12]. One of the virtuous effects of RRI practices could be the acquisition of this type of communication and translation ability, notably in complex cases where recommendations from experts have an impact not only on the decisions of public powers, but also on the behaviour of individuals.

More generally, this example once again highlights the limits of a purely retrospective form of responsibility. By neglecting to concern themselves with the fate of the conclusions reached at their meeting, the scientists on the Major Risks Commission demonstrated a form of disengagement with regard to the residents of the Aquila region. A more positive form of responsibility (we shall examine such forms in detail in the following chapter) would have led them to be more concerned with the outcomes of their recommendations.

3.4. Conclusion

This chapter has attempted to show how negative understandings of responsibility are necessary and important. However, because they are based on a consequentialist and instrumental perspective on responsibility, they do not allow us to see individual actions from an ethical point of view. They neither call upon nor involve the individual’s moral capacity when considering the appropriate way to act (from a moral point of view).

We must therefore turn towards positive understandings of responsibility. These will allow a truly normative dimension (a shift towards what “ought to be”) to be added to individual action, as well as to help us face an uncertain future without attempting, in vain, to anticipate all consequences.
