Chapter 1

Simulation: History, Concepts, and Examples

1.1. Issues: simulation, a tool for complexity

1.1.1. What is a complex system?

The world in which we live is rapidly advancing along a path toward complexity. In effect, the time when component parts of a system could be dealt with individually has passed: we no longer consider the turret of a tank in isolation but consider the entire weapon system, with chassis, ammunition, fuel, communications, operating crew (who must be trained), maintenance crew, supply chain, and so on. It is also possible to consider the tank from a higher level, taking into account its interactions with other units in the course of a mission.

We thus pass from a system where a specific task is carried out to a system of systems accomplishing a wide range of functions. Our tank, for example, is now just one element in a vast mechanism (or “force system”), which aims to control the aeroterrestrial sphere of the battlefield.

Therefore, we must no longer use purely technical, system-oriented logic, but a system-of-systems-oriented, capacity-driven logic [LUZ 10, MAI 98].

This is also true for a simple personal car, a subtle mixture of mechanics, electronics, and information technology (IT), the design of which takes into account manufacturing, marketing, and maintenance (including the development of an adequate logistics system) and even recycling at the end of the vehicle’s useful life, a consideration which is becoming increasingly important with growing awareness of sustainable development and ecodesign. The Toyota Prius, for example, is a hybrid vehicle whose component parts pollute more than average, yet its end-of-life recycling potential is not only high but also organized by the manufacturer, which, for instance, offers a bonus for retrieval of the NiMH traction battery, the most environmentally damaging of the car’s components. In this way, the manufacturer ensures that the battery does not end up in a standard refuse dump, but instead follows the recycling process developed at the same time as the vehicle. In spite of this, the manufacturer remains competitive.

These constraints place the bar very high in engineering terms. Twenty years ago, systems were complicated, but could be simplified by successive decompositions which separated the system into components that were easy to deal with, for example, a gearbox, a steering system, or an ignition system. Once these components were developed and validated, they could simply be integrated following the classic V model. Nowadays, engineers are confronted more and more often with complex systems, rendering a large part of the development methods used in previous decades invalid and necessitating a new approach.

Thus, what is a complex system? Complex systems are nothing new, even if they have gained importance in the 21st century. The Semi-Automatic Ground Environment (SAGE) air defense system developed by the United States in the 1950s, or Concorde in the 1960s, are examples of complex systems, even if they were not labeled as such. SAGE can even be considered a system of systems. However, the methods involved were barely formalized, leading to errors and omissions in the system development processes. In 1969, Simon [SIM 69] defined a complex system as being “a system made of a large number of elements which interact in a complex manner”. Jean-Louis Le Moigne gave a clearer definition [LEM 90]: “The complexity of a system is characterized by two factors: on the one hand, the number of constituent parts, and on the other, the number of interrelations”.

Overall, then, we shall judge the complexity of a system not only by the number of components but also by the relationships and dependencies between those components. A product of which a large proportion is software thus becomes complex very rapidly. Other factors of complexity exist, for example, human involvement (i.e. multiple operators) in system components, the presence of random or chaotic phenomena (which make the behavior of the system non-deterministic), the use of very different time scales or disciplines in sub-systems, or the rapid evolution of specifications (a changing operating environment). An important property of complex systems is that when the sub-systems are integrated, we are often faced with unpredicted emergent behaviors, which can prove beneficial (acquisition of new capacities) or disastrous (a program may crash). A complex system is therefore much more than the sum of its parts and associated processes. It can thus be characterized as non-Cartesian: it cannot be analyzed by a series of decompositions. This is the major (but not the only) challenge of complex system engineering: mastery of these emergent properties.
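
To give a rough sense of Le Moigne’s second factor, the number of potential pairwise interrelations grows roughly with the square of the number of components. The short Python sketch below is purely illustrative and is not drawn from the original text; it simply counts the maximum number of distinct pairwise relations for a few system sizes.

```python
# Illustrative only: n(n-1)/2 as a crude proxy for the number of
# interrelations that may have to be engineered and tested.
def pairwise_interactions(n_components: int) -> int:
    """Maximum number of distinct pairwise relations between n components."""
    return n_components * (n_components - 1) // 2

for n in (5, 50, 500):
    print(f"{n:>4} components -> up to {pairwise_interactions(n):>7} pairwise relations")
# yields 10, 1225 and 124750 respectively: relations grow much faster than
# the number of parts, which is why integration effort, not component count,
# tends to dominate the complexity of a system.
```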

On top of the intrinsic complexity of systems, we find increasingly strong exterior constraints that make the situation even more difficult:

– increasing number of specifications to manage;

– increasingly short cycles of technological obsolescence; system design is increasingly driven by new technologies;

– pressure from costs and delays;

– increasing necessity for interoperability between systems;

– larger diversity in product ranges;

– more diverse human involvement in the engineering process, but with less individual independence, with wide (including international) geographic distribution;

– less acceptance of faults: strict reliability constraints, security of individuals and goods, environmental considerations, sustainable development, and so on.

To manage the growing issues attached to complexity, we must perfect and adopt new methods and tools: modern global land-air defense systems could not be developed in the same way as SAGE was in the 1950s (a development which itself did not escape problems and major cost and schedule overruns). Observant readers may point out that a complex system loses its complexity if we manage to model and master it. This is indeed one way of seeing things; for this reason, the notion of complexity evolves with time and technological advances. Certain systems considered complex today may not be considered complex in the future. This work aims to contribute to this process.

1.1.2. Systems of systems

In systems engineering, numerous documents, such as the ISO/IEC 15288 norm, define processes that aim to master system complexity. These processes often reach their limits once we reach a situation with systems of systems. If we can, as a first step, decompose a system of systems hierarchically into a group of systems which cooperate to achieve a common goal, the aforementioned processes may be applied individually. However, to stop at this approach is to run considerable risks; by its very nature, a system of systems is often more than the sum of its parts.

It would, of course, be naïve to restrict the characterization of systems of systems to this property, but it is the principal source of their appeal and of risks. A system of systems is a higher-level system which is not necessarily a simple “federation” of other systems.

Numerous definitions of systems of systems can be found in current literature on this subject: [JAM 05] gives no less than six, and Chapter 1 of [LUZ 10] gives more than 40. In this case, we shall use the most widespread definitions, based on the so-called Maier criteria [MAI 98]:

– operational independence of constituent systems (which cooperate to fulfill a common operational mission at a higher, capability level);

– functional autonomy of constituent systems (which operate autonomously to fulfill their own operational missions);

– managerial independence of constituent systems (acquired, integrated, and maintained independently);

– changeable design and configuration of the system (specifications and architectures are not fixed);

– emergence of new behaviors exploited to improve the capacities of each constituent system or provide new capacities (new capacities emerge via the cooperation of several systems not initially developed for this purpose);

– geographical distribution of the constituent systems (hence the particular and systematic importance of information systems and communication infrastructures in systems of systems).

As a general rule, the main sources of difficulties in mastering a system of systems are as follows:

– intrinsic complexity (by definition);

– the multi-trade, multi-leader character of the system, which poses problems of communication and coordination between the various individuals involved, who often come from different cultural backgrounds;

– uncertainty concerning specifications or even the basic need, as mastery of a system of systems presents major challenges for even the most experienced professionals. This difficulty exists on all levels, for all involved in the acquisition process, including the final user and the overseer, who may have difficulties expressing and stabilizing their needs (which may legitimately evolve due to changes in context, e.g. following a dramatic attack on a nation or a major economic crisis), making it necessary to remain agile and responsive regarding specifications;

– uncertainty concerning the environment, considering the number of people concerned and the timescale of the acquisition cycle, which may, for example, lead to a component system or technology becoming obsolete even before being used. This is increasingly true due to the growing and inevitable use of technologies and commercial off-the-shelf products, particularly those linked to information and communications technology which leads to products becoming obsolete with increasing speed.

To deal with these problems, a suitable approach, culture, and tools must be put into place. Simulation is an essential element of this process but is not sufficient on its own. Currently, there is no fully tested process or “magic program” able to deal with all these problems, although the battle lab approach does seem particularly promising, as it has been developed specifically to respond to the needs of system-of-systems engineering. This approach is described in detail in Chapter 8.

1.1.3. Why simulate?

As we shall see, a simulation can be extremely expensive, costing several million euros for a system-of-systems simulation, if not more. The acquisition of the French Rafale simulation centers, now operational, cost around 180 million euros (but generates savings, as it removes the need to buy several Rafale training aircraft, which together would have cost more than double the price of the centers, before even considering the maintenance costs involved in keeping them operational). The American JSIMS joint (inter-service) program was another example of this kind, but it was stopped after over one billion dollars of investment. These cases are, admittedly, extreme, but they are not unique. Some of these exceedingly costly simulation systems are themselves complex systems and have been known to fail. If, then, a simulation is so difficult to develop, why simulate?

First, a word of reassurance: not all simulation programs are so expensive, and not all are failures. By following rigorous engineering procedures, notably a number of the processes described in the present work, it is possible to guarantee the quality of simulations, as with any industrial product. Second, simulation is not obligatory but is often a necessary means to an end. We shall review the principal constraints that can lead to working with simulations to achieve all or part of a goal.

1.1.3.1. Simulating complexity

Those who have already been faced with system engineering, even briefly, will be aware of just how delicate a matter the specification and development of current systems is. The problem is made even trickier by the fact that current products (military or civilian) have become complex systems (systems of systems). We do not need to look as far as large-scale air traffic control systems or global logistics to find these systems; we encounter them in everyday life. Mobile telephone networks, for example, may be considered to be systems of systems, and a simple modern vehicle is a complex system. For these systems, a rigorous engineering approach is needed to avoid non-attainment of objectives or even complete failure.

In this case, simulation can be used on several levels:

– simulation can assist in the expression of a need, allowing the client to visualize their requirements using a virtual model of the system, providing a valuable tool for dialog between the parties involved in the project, who often do not have the same background, and guaranteeing sound mutual understanding of the desired system;

– this virtual model can then be enriched and refined throughout the system development process to obtain a virtual prototype of the final system;

– during the definition of the system, simulation can be used to validate technological choices and avoid falling into a technological impasse;

– during testing of the system, simulations developed during the previous stages can be used to reduce the number of tests needed and/or to extend the field of system validation;

– when the system is finally put into operation, simulations can be used to train system operators;

– finally, if system developments are planned, existing simulations facilitate analysis of their impact on the performance of the new system.

These uses show that simulation is a major tool for system engineering, to the point that nowadays it is difficult to imagine complex system engineering without simulation input. We shall not go any further into this subject at present as we shall discuss it later, but two key ideas are essential for the exploitation of simulation in system engineering:

– Simulation should be integrated into the full system acquisition process to achieve its full potential: this engineering concept is known as simulation-based acquisition (SBA, see [KON 01]) in America and synthetic environment-based acquisition (see [BOS 02]) in the United Kingdom (the different national concepts have different nuances, but the general philosophy is similar).

– The earlier simulation is used in a program, the more effective it will be in reducing later risks.

To understand the issue of risk reduction, we shall provide a few statistics: according to the Standish Group [STA 95], approximately one-third of projects (in the field of IT) do not succeed. Almost half of all projects overrun in terms of cost by a factor of two or more. Only 16% of projects are achieved within the time and cost limitations established at the outset, and the more complex the project, the further this figure decreases: 9% for large companies, which, moreover, end up deploying systems that lack, on average, more than half of the functionality initially expected. We have, of course, chosen particularly disturbing figures, and recent updates to the report have shown definite improvements, but the 2003 version [STA 95] still shows that only one-third of projects are achieved within the desired cost and time limitations. This represents an improvement of 100%, but there is still room for a great deal of progress to be made.

1.1.3.2. Simulation for cost reduction

Virtual production is costly, admittedly, but often cheaper than real production. The construction of a prototype aircraft such as the Rafale costs hundreds of millions of euros. Even a model can be very expensive to produce. Once the prototype or model arrives, tests must be carried out, which in turn carry their own costs in terms of fuel and the mobilization of corresponding resources. Finally, if the test is destructive (e.g. as in the case of missile launches), further investment is required for each new test.

For this reason, in the context of ever-shrinking budgets, the current tendency is toward fewer tests and their partial replacement by simulation, with the establishment of a “virtuous simulation-test circle” [LUZ 10]. The evolution of the number of tests during the development of ballistic missiles by the French strategic forces, a particularly complex system, is instructive in this respect:

– The first missile, the M1, was developed in only 8 years, but at great cost. Modeling was used very little, and a certain level of empiricism was involved in the tests, of which there were 32 (including nine failures).

– For the M4, slightly more use was made of simulation, but the major progress came essentially from the implementation of a quality control process. Fourteen tests were carried out, including one failure.

– During the development of the M51, which represented a major technological leap forward, simulation was brought to the fore to reduce flight and ground tests, with the aim of carrying out fewer than 10 tests.

Nevertheless, it should be highlighted that, contrary to an all-too-widespread belief, simulation is not in principle intended to replace testing: the two techniques are complementary. Simulation allows tests to be optimized, allowing better coverage of the area in which the system will be used with, potentially, fewer real tests. Simulation is not, however, possible without tests, as it requires real-world data to set model parameters and to validate results. Without this input, simulation results could rapidly lose all credibility.

Figure 1.1 illustrates this complementarity: it shows the simulation and a test of armor penetration munitions, carried out with the OURANOS program at the Gramat research center of the Defense Procurement Directorate (Direction générale de l’armement, DGA). The simulation allows a large number of hypotheses to be played out with minimum cost and delay, but these results still need to be validated by tests, even if the number of tests is reduced by the use of simulation. Chapter 3 gives more detail on the principle of validation of models and simulations.

Figure 1.1. Comparison of simulation and real test (DGA/ETBS)


One area in which simulation has been used over the course of several decades is the training of pilots. The price range of a flight simulator certified by the aviation authorities (Federal Aviation Administration (FAA), USA; Joint Aviation Authorities (JAA), Europe; and so on) is sometimes close to that of a real airplane (from 10 to 20 million euros for an advanced full flight simulator and approximately 5 million euros for a simplified training-type simulator). The Rafale simulation center in Saint-Dizier, France, costs a trifling 180 million euros for four training simulators. Obviously, this leads to questions about whether this level of investment is justified. To judge, we need a few more figures concerning real systems. To keep to the Rafale example, the catalog price of one airplane is over 40 million euros, meaning it is not possible to have a large number of these aircraft. Moreover, a certain number must be reserved for training. The use of simulation for a large part of the training reduces the number of aircraft unavailable for military operations. Indeed, approximately 50 aircraft would be needed to obtain the flight equivalent of the Rafale simulation center (Centre de simulation Rafale, CSR) training. Moreover, the real cost of an airplane is considerably more than its catalog price. Frequent maintenance and updates are required, alongside fuel, all of which effectively doubles or triples the total cost of the system over its lifespan. An hour’s flight in a Mirage 2000 aircraft costs approximately 10,000 euros, and an hour in a Rafale costs twice that figure. The military must also consider munitions; a simple modern bomb costs approximately 15,000 euros, or more depending on the sophistication of the guidance system. A tactical air-to-ground missile costs approximately 250,000 euros.

An hour in a simulator costs a few hundred euros, during which the user has unlimited access to munitions. We need not continue …. Moreover, the simulator presents other advantages: there is no risk to the student’s life in case of accident, environmental impact (and disturbance for the inhabitants of homes near airfields) is reduced, and the simulator offers unparalleled levels of control (e.g. the instructor can choose the weather, the time of flight, and simulate component failures).

In this case, why continue training using real aircraft? Simulation, as sophisticated as it may be (and in terms of flight simulators, this can go a long way), cannot replace the sensations of real flight. As in the case of tests, simulation can reduce the number of hours of real flight while providing roughly equivalent results in terms of training. For the Rafale, a difficult aircraft to master, between 230 and 250 h of flight per year are needed for pilots to operate efficiently, but the budget only covers 180 h. Without simulation, the Rafale pilots could not completely master their weapons systems. We are therefore dealing with a question of filling a hole in the budget rather than an attempt to reduce flight hours. Note, though, that in civil aviation, so-called transformation courses exist, which allow pilots already experienced in the use of one aircraft to become qualified on another model of the same range through simulation alone (although the pilot must then complete a period as a co-pilot on the new aircraft).

1.1.3.3. Simulation of dangerous situations

Financial constraints are not the only reason for wishing to limit or avoid the use of real systems. Numerous situations exist where tests or training involves significant human, material, or environmental risks.

To return to the example of flight simulation, international norms for the qualification of pilots (such as the Joint Aviation Requirements) demand that pupils be prepared to react to different component failures during flight. It is hard to see how this kind of training could be carried out in real flight, as it would put the aircraft and crew in grave danger. These failures are therefore reproduced in the simulator, enabling the pupil to acquire the necessary reflexes and allowing the instructor to test the pupil’s reactions.

On a different level, consider the case of inter-service and inter-allied training: given the number of platforms involved, the environmental impact would be particularly significant. Thus, the famous NATO exercise Return of Forces to Germany (REFORGER), which tested NATO’s capacity to send forces into Europe in the event of a conflict with the Soviet Union and its allies, involved, in its 1988 edition, approximately 97,000 individuals, 7,000 vehicles, and 1,080 tanks, costing 54 million dollars (at the time), of which almost half was used to compensate Germany for environmental damage. In 1992, the use of computer simulation allowed the deployment to be limited to 16,500 individuals, 150 armored vehicles, and no tanks, costing a mere 21 million dollars (everything is relative …). The most remarkable part is that the environmental damage in 1992 cost no more than 250,000 dollars.

1.1.3.4. Simulation of unpredictable or non-reproducible situations

Some of our readers may have seen the film Twister. Setting aside the fictional storyline and Hollywood special effects, the film concerns nothing more nor less than an attempt by American researchers to create a digital simulation of a tornado, a destructive natural phenomenon which is, alas, frequent in the United States. As impressive as it is, this phenomenon is unpredictable and thus difficult to study, hence the existence of “tornado chaser” scientists, as seen in the film. In the movie, the heroes try to gather the data required to build and validate a digital model. This proves very challenging, as tornadoes are unpredictable and highly dangerous, and their own lives are threatened more than once. However, the stakes are high; simulation allows a better understanding of the mechanisms which lead to the formation of a tornado, increasing the warning time which can be given.

Simulation thus allows us to study activities or phenomena that cannot be observed in nature because of their uniqueness or unpredictability. The dramatic tsunami of December 2004 in the Indian Ocean, for example, has been the subject of a number of simulation-based studies. The document [CEA 05], a summary of works published by the French Alternative Energies and Atomic Energy Commission (Commissariat à l’énergie atomique et aux énergies alternatives, CEA), shows how measurements were carried out and digital models then constructed, enabling the events to be better understood; whether this will allow us to avoid a repetition of the terrible consequences of the tsunami, which left around 200,000 dead, remains to be seen.

1.1.3.5. Simulation of the impossible or prohibited

In some cases, a phenomenon or activity cannot be reproduced experimentally. A tsunami or a tornado can be reproduced on a small scale, but not, for example, a nuclear explosion.

On September 24, 1996, 44 states, including France, signed the Comprehensive Test Ban Treaty (CTBT). By August 2008, almost 200 nations had signed. By signing and then ratifying the treaty 2 years later, France committed itself to not carrying out experiments with nuclear weapons. Nevertheless, the treaty did not rule out nuclear deterrence, nor evolutions in the French arsenal; how, though, can progress be made without tests?

For this reason, in 1994, the CEA and, more specifically, its Military Applications Division (Direction des applications militaires, DAM) launched the Simulation program. With a duration of 15 years and a total cost of approximately 2 billion euros, the program consists of three main sections:

– An X-ray radiography device, Airix, operational since 2000, allows us to obtain images of the implosion of a small quantity of fissionable material, an event which allows a nuclear charge to reach critical mass and spark a chain reaction and detonation. Of course, as these experiments are carried out on very small quantities, there is no explosion. To measure these phenomena over a duration of 60 ns (i.e. billionths of a second), the most powerful X-ray generator in the world (20 MeV) was developed.

– The megajoule laser (LMJ) allows thermonuclear micro-reactions to be set off. Under construction near Bordeaux at the time of writing, the LMJ will reproduce the conditions of a miniature thermonuclear explosion by focusing 240 laser beams, with a total energy of 2 million joules, on a deuterium and tritium target. The project is due for completion in 2012.

– The TERA supercomputer has been operational since 2001 and is regularly updated. One of the most powerful calculation centers in Europe, TERA-10, the 2006 version, included more than 10,000 processors, giving a total power of around 50 teraflops (50,000 billion floating-point operations per second), and was classed at number 7 on the global list of supercomputers (according to the “Top 500 list”, www.top500.org).

Moreover, to provide the necessary data for the simulation program (among other things), President Jacques Chirac took the decision in 1995 to re-launch a program of nuclear tests in the Pacific, a decision which had major consequences for France’s image abroad.

This allows us to measure the scale of the most important French simulation program of all time, which aims to virtualize a now prohibited activity; in this case, simulation is essential, but it is not simple to put into operation, nor is it cheap.

1.1.4. Can we do without simulation?

Simulation is a tool of fundamental importance in systems engineering. We should not, however, see it as a “magic solution”, a miraculous program which is the answer to all our problems — far from it.

Simulation, while presenting numerous advantages, is not without its problems. What follows is a selection of the principal issues.

– Simulation requires the construction of a model, which can be difficult, and the definition of its parameters. If the system being modeled does not already exist, the selection of parameters can be tricky and may need to be based on extrapolations from existing systems, with a certain risk of error.

– The risk of errors is considerably reduced by using an adequate process of checks and validation, but processes of this kind are costly (although they do provide considerable benefits, as with any formalized quality control process).

– In cases where a system is complex, the model is generally complex too, and its construction, as well as the development of the simulation architecture, requires the intervention of highly specialized experts. Without this expertise, there is a serious risk of failure, for example, in choosing an unstable, invalid, or low-performing model.

– The construction of a model and its implementation in a simulation are expensive and time consuming, factors which may make simulation inappropriate if a rapid response is required, unless a pre-existing library of valid and ready-to-use models is available in the simulation environment.

– The implementation of a complex model can require major IT infrastructures: note that the world’s most powerful supercomputers are mainly used for simulations.

– The correct use of simulation results is often not an easy task. For example, when simulating a stochastic system — a fairly frequent occurrence when studying a system of systems — results vary from one run to another, so that data from several dozen runs of the simulation are sometimes required before reliable results can be obtained (a minimal illustration is sketched after this list).
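
As a minimal illustration of this last point, the following Python sketch (a purely hypothetical example; the toy model and names such as run_once are placeholders, not taken from the text) performs several independent replications of a stochastic model and derives a rough confidence interval on the mean output.

```python
import random
import statistics

def run_once(seed: int) -> float:
    """Placeholder for one replication of a stochastic simulation.
    Here: the mean of 100 exponentially distributed random draws."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(100)) / 100

def replicate(n_runs: int = 30):
    """Run n_runs independent replications and summarize the output."""
    results = [run_once(seed) for seed in range(n_runs)]
    mean = statistics.mean(results)
    # Rough 95% confidence half-width, assuming a near-normal sample mean.
    half_width = 1.96 * statistics.stdev(results) / (n_runs ** 0.5)
    return mean, half_width

mean, hw = replicate()
print(f"estimated output: {mean:.3f} +/- {hw:.3f} (95% CI over 30 replications)")
```

A single run would give only one point of this distribution; it is the spread across replications that tells us how much the estimate can be trusted.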

Other methods that can help resolve problems without resorting to simulation exist. These methods are generally specific to certain types of problem but can provide satisfactory results.

After all, 100 years ago, we got along without simulation. Admittedly, systems were simple (“non-complex”) and designed empirically, for the most part, with large safety margins in their specifications. Reliability was considerably lower than it is today. Nevertheless, one technique used then is still frequently encountered: real tests on scale models or prototypes, that is, the construction of a physical model of the system. For example, we could build a scale model of an airplane and place it in a wind tunnel to study the way the air moves around the model, from which the behavior of the full-scale aircraft in the same circumstances can be deduced. This scaling down is not necessarily without its problems, as physical phenomena do not always evolve in a linear manner with the scale of the model, but the theoretical process is relatively well understood. On the other hand, the uses of this kind of model are limited: it cannot accomplish missions, nor can a wind tunnel reproduce all possible flight situations (without prohibitively expensive investment in the installation). Flying models can be built, for example, NASA’s X-43A, a 3.65 m long unpiloted model airplane, equipped with a scramjet engine and able to fly at Mach 10. Of course, numerous simulations were used to reach this level of technological achievement, but the time came when it was necessary to move to a real model: the results of the simulations needed to be validated in a little-understood domain, that of air-breathing hypersonic flight. Moreover, from a marketing point of view, a model accomplishing real feats (the model in question has a number of entries in the Guinness Book of Records) is much easier to “sell” to the public and to backers than a computer simulation, which is much less credible to lay people (and even to some experts).

Models are therefore useful, even essential, but they can also be expensive and cannot reproduce all the capacities of the system. A prototype would go further toward reproducing the abilities of the real system, but it is also extremely expensive: each of the four Rafale prototypes cost much more than the production aircraft, which are built in larger numbers.

Using models, we are able to study certain aspects of the physical behavior of the system in question. The situation is different if the system under study is an organization, for example, the logistics of the Airbus A380 or the projection of armed forces on a distant, hostile territory. There are still techniques and tools other than simulation which can be used to optimize organizations, for example, constraint programming or expert systems, but their use is limited to certain specific problems.

For the qualification of on-board computer systems and communication protocols, formal mathematical systems (formal methods), associated with ad hoc languages (Esterel, B, Lotos, VHDL, and so on), are used to obtain “proof” of the validity of system specifications (completeness, coherence, and so on). These methods are effective, but somewhat cumbersome, and their operation requires a certain level of expertise. Nor can these methods claim to be universal.

Finally, simulation does not remove the value of collaborative working and of drawing on the competences of those involved with the system throughout its life. This is the case of the technico-operational laboratory (laboratoire technico-opérationnel, LTO) of the French Ministry of Defense, a major actor in the development of systems of systems for defense. We shall discuss this in Chapter 8. When dealing with a requirement, the work of the LTO begins with a period of reflection on the subject by a multi-disciplinary group of experts in a workgroup laboratory (laboratoire de travail en groupe, LTG). Simulation is used as a basis for reflection and to evaluate different architectural hypotheses. The LTO is a major user of defense simulation, but it has also shown that by channeling the creativity and analytical skills of a group of experts using rigorous scientific methods, high-quality results may be obtained more quickly and at lower cost than by using simulation.

Simulation, then, is an essential tool for engineering complex systems and systems of systems, but it is not a “magic wand”; it comes with costs and constraints, but, used intelligently and appropriately, it can produce high-quality results.

1.2. History of simulation

Simulation is often presented as a technique which appeared at the same time as computer science. This is not the case; its history does not begin with the appearance of computers. Although the formalization of simulation as a separate discipline is recent and, indeed, unfinished, it has existed for considerably longer, and the establishment of a technical and theoretical corpus took place progressively over several centuries.

The chronology presented here does not claim to be exhaustive; we have had to be selective, but we have tried to illustrate certain important steps in the technical evolution and uses of simulation. It is in the past that we find the keys to the present: in the words of William Shakespeare, “what’s past is prologue”.

1.2.1. Antiquity: strategy games

The first models can be considered to date from prehistoric times. Cave paintings, for example, were effectively idealized representations of reality. Their aim, however, was seemingly not simulation, so we shall concentrate on the first simulations, developed in a context which was far from peaceful.

Even before the golden age of Pericles in Greece and the conquests of Alexander the Great, a sophisticated art of war developed in the East. Primitive battle simulations were developed as tools for teaching military strategy and stimulating the imagination of officers. Thus, in China, in the 6th Century BC, the general and war philosopher Sun Tzu, author of a work on the art of war which remains a reference today, supposedly created the first strategy game, Wei Hei. The game consisted of capturing territories, as in the game of go, which appeared around 1200 BC. The game Chaturanga, which appeared in India around 500 BC, evolved in Persia to become the game of chess, in which pieces represent different military categories (infantry, cavalry, elephants, and so on). According to the Mahâbhârata, a traditional Indian text written around 2,000 years ago, a large-scale battle simulation took place during this period with the aim of evaluating different tactics.

In around 30 AD, the first “sand tables” appeared for use in teaching and developing tactics. Using a model of the terrain and more or less complicated rules, these tables allowed combat to be simulated. Intervisibility was calculated using a string. This technique remained in use for two millennia and is still found in a number of military academies and in ever-popular figurine-based war games.

1.2.2. The modern era: theoretical bases

The Renaissance in Europe was a time of great scientific advances and a new understanding of the real world. Scholars became aware that the world is governed by physical laws; little by little, philosophy and theology gave way to mathematics and physics in the quest to understand the workings of the universe. Knowledge finally advanced beyond the discoveries of the Greek philosophers of the 5th Century BC. Mathematics, in particular, developed rapidly over the course of three centuries. Integral calculus, matrices, algebra, and so on provided the mathematical basis for the construction of a theoretical model of the universe. Copernicus, Galileo, Descartes, Kepler, Newton, Euler, Poincaré, and others provided the essential elements for the development of equations to express physical phenomena, such as the movement of planets, the trajectory of a projectile, the flow of fluids, or chaotic phenomena. The universe thus became predictable and could be modeled. In 1855, the French astronomer Urbain Le Verrier, already famous for having discovered the existence of the planet Neptune by calculation, demonstrated that, with access to the corresponding meteorological data, the storm which caused the Franco-British naval debacle in the Black Sea on November 14, 1854 (during the Crimean War) could have been predicted. Following this, Le Verrier created the first network of weather stations and thus founded modern meteorology. Today, weather forecasting is one of the most publicly visible uses of simulation.

In parallel, the military began to go beyond games of chess and to foresee more concrete operational applications of simulation. From 1664, the German Christopher Weikhmann modified the game of chess to include around 30 pieces more representative of contemporary troops, thus inventing the Koenigsspiel. In 1780, another German, Helvig, invented a game in which each player had 120 pieces (infantry, cavalry, and artillery), played on a board of 1,666 squares of various colors, each representing a different type of terrain. The game enjoyed a certain success in Germany, Austria, France, and Italy, where it was used to teach basic tactical principles to young noblemen. In 1811, Baron von Reisswitz, advisor at the Prussian court, developed a battle simulation game known as Kriegsspiel. King Friedrich Wilhelm III was so impressed that he had the game adopted by the Prussian army. Kriegsspiel was played on a table covered with sand, using wooden figurines to represent different units. There were rules governing movements and the effects of terrain, and the results of engagements were calculated using resolution tables. Two centuries later, table-based war games using figurines operate on the same principles. The computer-based war games that dominate the field nowadays have rendered the sandpit virtual, but once again, the basic principles are the same.

These games allowed different tactical hypotheses to be explored with ease and permitted greater innovation in matters of doctrine. The German Emperor Wilhelm I believed that Kriegsspiel played a significant role in the Prussian victory against France in 1870. During the same period, the game was also used in Italy, Russia, the United States, and Japan, mainly in attempts to predict the result of battles. The results, however, were not always reliable. Thus, before World War I, the Germans used a variant of Kriegsspiel to predict the outcome of a war in Europe, but they did not consider several decisive actions by the allied forces which took place in the “real” war. The game was also used by the Third Reich and by the Japanese in World War II. Note, however, that the results of simulations were not always heeded by the general staff. Kriegsspiel predicted that the battle of Midway would cost the Japanese two aircraft carriers. The Japanese continued and actually lost four, after which their defeat became inevitable. It is said that, during the simulation, the Japanese player refused an aircraft carrier maneuver by the player representing America, canceling its movements; the real battle was, in fact, played out around the use of aircraft carriers and carrier-based aircraft. This anecdote illustrates a problem that may be encountered in the development of scenarios: the human factor, where operators may refuse to accept certain hypotheses.

In the 1950s and 1960s, the US Navy “played” a number of simulations of conflicts with the Soviet forces and their allies. In these simulations, it was virtually forbidden to sink an American aircraft carrier; this would have shown their vulnerability and provided arguments for the numerous members of Congress who considered these mastodons of the sea too expensive and were reluctant to finance their development as the spearhead of US maritime strategy. It is, however, clear that in case of conflict, the NATO aircraft carriers would have been a principal target for the aero-naval forces of the Warsaw Pact.

In 1916, the fields of war games and mathematics met when F.W. Lanchester published a dynamic combat theory, which allowed, significantly, the calculation of losses during engagement (attrition). Almost a century later, Lanchester’s laws are still widely used in war games (Figure 1.2).

Figure 1.2. Image from Making and Collecting Military Miniatures by Bob Bard (1957), with an original illustration published in the Illustrated London News [DOR 08]

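Lanchester’s “square law” for aimed fire expresses attrition as a pair of coupled differential equations, dR/dt = -b·B and dB/dt = -r·R, where R and B are the opposing force strengths and r, b their effectiveness coefficients. The Python sketch below is a simple illustrative numerical integration with arbitrary coefficients; it is not a reproduction of any model from the text.

```python
def lanchester_square(red: float, blue: float, r_eff: float, b_eff: float,
                      dt: float = 0.01, t_max: float = 50.0):
    """Euler integration of Lanchester's square law (aimed fire):
    dR/dt = -b_eff * B,  dB/dt = -r_eff * R.
    Initial strengths and coefficients are arbitrary illustrative values."""
    t = 0.0
    while t < t_max and red > 0 and blue > 0:
        red, blue = red - b_eff * blue * dt, blue - r_eff * red * dt
        t += dt
    return t, max(red, 0.0), max(blue, 0.0)

# Example: a larger but less effective Red force against a smaller Blue force.
t_end, red_left, blue_left = lanchester_square(red=1000, blue=800,
                                               r_eff=0.05, b_eff=0.08)
print(f"engagement ends near t = {t_end:.1f}: red = {red_left:.0f}, blue = {blue_left:.0f}")
```

With these values the smaller but more effective Blue force prevails, the kind of non-intuitive attrition outcome that such models are used to explore in war games.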

Before the movement toward computerization, combat simulation became increasingly popular. In 1952, Charles S. Roberts invented a war game based on a terrain divided using a rectangular grid. Cardboard tokens carried the symbols of various units (infantry, artillery, armor, and so on), and movements and combats were controlled by rules. This game, called Tactics, was a great success, and Roberts founded the Avalon Hill games company. From its first appearance, the commercial war board game enjoyed growing success, and large-scale expansion took place in the 1970s. Over 200,000 copies of Panzerblitz, an Avalon Hill game, were sold.

1.2.3. Contemporary era: the IT revolution

1.2.3.1. Computers

Scientific advances allowed considerable progress in the modeling of natural phenomena in the 17th and 18th Centuries, but a major obstacle remained: the difficulty of calculation, particularly in producing the logarithm and trigonometry tables necessary for scientific work. At the time, the finite difference method was used (converging toward the result by successive expansion), involving large amounts of addition on decimal numbers. Whole departments, each with dozens of workers, were assigned to this task in numerous organizations. Furthermore, each calculation was usually carried out twice to detect the inevitable errors. Automating calculation would therefore be of considerable scientific and economic interest. The work of Pascal and Leibniz, among others, led to the construction of calculating machines, but these required tedious manual interventions and remained prone to error. A means of executing a full algorithm automatically was needed.

In 1833, Charles Babbage invented an “analytical engine”, which contained most of the elements present in a modern computer: a calculating unit, memory, registers, a control unit, and mass storage (in the form of a perforated card reader). Although attempts at construction failed, it was proved a century later that the invention would have worked, despite some questionable design choices. Babbage’s collaborator, Ada Lovelace, wrote what is considered the first computer program. A few years later, George Boole invented the algebra which carries his name, the basis of modern computer logic. The seeds had been sown, but the technique took around a hundred years to bear fruit.

The first calculators were not built until the 1940s, with, for example, John Atanasoff’s electronic calculator at Iowa State College. Although this calculator contained memory and logic circuits, it could not be programmed. There was also the Colossus, built in the United Kingdom and used by cryptologists working under the direction of Alan Turing, whose theoretical work some 10 years earlier had laid the foundations of computer programming, as well as the Harvard Mark II.

The first true modern computer, however, was the ENIAC (Figure 1.3). The ENIAC, with nearly 18,000 vacuum tubes, could carry out an addition in 200 µs and a multiplication in 2,800 µs (i.e. on the order of 1,000 operations per second). When it was launched in 1946, it broke down every 6 h, but despite these problems, the ENIAC represented a major advance, not only in terms of calculating speed — incomparably higher than that of electromechanical machines — but also in being easily programmable. This power was initially used to calculate firing tables for the US Army, then rapidly made available for simulation, notably in the development of nuclear warheads, a very demanding activity in terms of repetitive complex calculations. This work built on that of mathematicians such as Ulam, von Neumann, and Metropolis who, in the course of the Manhattan Project, formalized the statistical techniques for the study of random phenomena by simulation (the Monte-Carlo method). Note that the Society for Computer Simulation, an international not-for-profit association dedicated to the promotion of simulation techniques, was founded in 1952.

Figure 1.3. The ENIAC computer (US Army)

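The Monte-Carlo method mentioned above estimates a quantity of interest by averaging over a large number of random trials. The classic textbook illustration below (a generic example, not taken from the book) estimates π by drawing random points in the unit square and counting how many fall inside the quarter circle.

```python
import random

def estimate_pi(n_samples: int = 1_000_000, seed: int = 42) -> float:
    """Monte-Carlo estimate of pi: the fraction of uniformly random points in
    the unit square that fall inside the quarter circle of radius 1 tends to
    pi/4 as the number of samples grows."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi())  # approaches 3.14159..., with error shrinking as 1/sqrt(n)
```

The same principle, applied to far more demanding physics, is what made machines such as the ENIAC so valuable to the nuclear weapons designers of the period.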

In the first few years, the future of the computer was unclear. Some had doubts about its potential for commercial success. For simulation, notably the integration of systems of differential equations, another path was opening up: that of the analog computer. Analog computers were used to model physical phenomena by analogy with electric circuits, physical magnitudes being represented by voltages. Groups of components could carry out operations such as multiplication by a constant, integration with respect to time, or multiplication or division of one variable by another. Analog computers reached the height of their success in the 1950s and 1960s, but became totally obsolete in the 1970s, pushed out by digital machines. In France, in the 1950s, the Ordnance Aerodynamics and Ballistics Simulator (Simulateur Aérodynamique et Balistique de l’Armement, SABA) analog computer was used at the Ballistics and Aerodynamics Research Laboratory (Laboratoire de Recherches Balistiques et Aérodynamiques, LRBA), in Vernon, to develop the first French surface-to-air missile, the self-propelled radio-guided projectile against aircraft (projectile autopropulsé radioguidé contre avions, PARCA).

Digital computers, as we know, emerged victorious, but their beginnings were difficult. On top of their cost and large size, programming them was a delicate matter; at best, a (not particularly user-friendly) teleprinter could be used, and at worst, perforated cards or even, for the oldest computers, switches or selector switches could be used to enter information coded in binary or decimal notation. In the mid-1950s, major progress was made with the appearance of the first high-level languages, including FORTRAN in 1957. Oriented toward scientific calculations, FORTRAN was extremely popular in simulation and remains, 50 years later, widely used for scientific simulations. One year later, in 1958, the first algorithmic language, ALGOL, appeared. This language, and its other versions (Algol60, Algol68), is important as, even if it was never as popular as more basic languages such as COBOL or FORTRAN, it inspired very popular languages such as Pascal and Ada. The latter was widely used by the military and by the aerospace industry from the mid-1980s. In the 1990s, object-oriented languages (Java, C#, and so on) took over from the descendants of Algol, while retaining a number of their characteristics. Smalltalk is often cited as the first object-oriented language, but this is not the case: the very first was Simula, in 1967, an extension of Algol60 designed for discrete event simulation.
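
Discrete event simulation, the style of modeling Simula was designed for, advances a model by jumping from one scheduled event to the next rather than stepping time by fixed increments. The minimal event-loop sketch below is purely illustrative and independent of Simula itself.

```python
import heapq

def run_discrete_event(initial_events, t_end: float):
    """Minimal discrete event loop: (time, label) pairs are processed in
    chronological order until the horizon t_end is reached; a real model
    would update its state here and schedule further events."""
    queue = list(initial_events)
    heapq.heapify(queue)
    while queue:
        t, label = heapq.heappop(queue)
        if t > t_end:
            break
        print(f"t = {t:6.2f}  {label}")
        # e.g. heapq.heappush(queue, (t + service_time, "departure"))

run_discrete_event([(0.0, "arrival"), (1.5, "arrival"), (2.3, "departure")],
                   t_end=10.0)
```

Languages and libraries descended from Simula package exactly this kind of event queue, together with the notion of processes and entities, into reusable simulation constructs.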

The 1960s saw a veritable explosion in digital simulation, which continues to this day. 1965 was an important year, with the commercialization of the PDP-8 mini-computer by the Digital Equipment Corporation. Because of its “reasonable” size and cost, thousands of machines were sold, marking a step toward the democratization of computing and of means of scientific calculation, which spread into universities and businesses. This tendency was reinforced by the deployment of workstations across research departments in the 1980s, followed by personal computers (PCs). This democratization not only meant an increase in access to means of calculation but also allowed greater numbers of researchers and engineers to work on and advance simulation techniques.

These evolutions also took place at the application level. In 1979, the first spreadsheet program, VisiCalc, was largely responsible for the success of the first micro-computers in businesses. This electronic calculation sheet made it possible to carry out simple simulations and data analyses (by iteration), notably in the world of finance. VisiCalc has long disappeared from the market, but its successors, such as Microsoft Excel, have become basic office tools and, seemingly, the most widespread simulation tool of our time.

1.2.3.2. Flight simulators and image generation

During the 1960s–1970s, although computers became more widespread and more accessible, simulations retained the form of strings of data churned out by programs, lacking concrete physical and visual expression.

Two domains were the main drivers behind the visual revolution in computing: piloted simulation, initially, followed by video games (which, incidentally, are essentially simulations in many cases). Learning to fly an airplane has always been difficult and dangerous, especially in the case of single-seat aircraft. How can the pupil have sufficient mastery of piloting before their first flight without risk of crashing?

The first flight “simulators” were invented in response to this problem. The “Tonneau” was invented by the Antoinette company in 1910 (Figure 1.4). A cradle on a platform with a few cables allowed students to learn basic principles. In the United States, the Sanders Teacher used an airplane on the ground. In 1928, in the course of his work on flying airplanes without visibility, Lucien Rougerie developed a “ground training bench” for learning to fly using instruments. This system is one of two candidates for the title of first flight simulator. The other was developed by the American Edward Link in the late 1920s. Link, an organ manufacturer with a passion for aviation, patented and then commercialized the Link Trainer, a simulator based on electro-pneumatic technology. The main innovation was that the cabin was made to move, using four bellows for roll and an electric motor for yaw, with a logic that followed the reactions of the aircraft. The instructor could follow the student’s progress using a replica of the control panel and a plotter, and could also set parameters such as wind influence. The first functional units were delivered to the American air force in 1934. The Link Trainer is considered to be the first true piloted simulator. The company still exists and continues to produce training systems.

These training techniques using simulation became more important during WWII, a period with constant pressure to train large numbers of pilots. From 1940 to 1945, US Navy pilots trained on Link ANT-18 Blue Box simulators, of which no less than 10,000 copies were produced.

Afterwards, these simulators became more realistic thanks to electronics. In the 1940s, analog computers were used to solve the equations governing the flight of the airplane. In 1948, a B377 Stratocruiser simulator, built by the Curtiss-Wright company, was delivered to Pan Am. The first of its kind to be used by a civil airline, it allowed a whole crew to be trained in a cockpit with fully functional instruments, but two fundamental elements were absent: the reproduction of the movement of the airplane and the exterior view, simulator cockpits of the time still being blind. In the 1950s, mechanical elements were added to certain simulators to make the cabin mobile and thus reproduce, at least to some degree, the sensations of flight. The visual aspect was first addressed by using a video camera flying over a model of the terrain and transmitting the image to a video screen in the cockpit. In France, the company Le Matériel Téléphonique (LMT) commercialized systems derived from the Link Trainer from the 1940s onward. The first French-designed electronic simulator was produced by the company in 1961 (a Mirage III simulator). LMT has continued its activities in the field, currently operating under the name Thales Training & Simulation.

Figure 1.4. The Antoinette “Tonneau”


1.2.3.3. From simulation to synthetic environments

The bases of simulation were defined during the period 1940–1960. These fundamental principles remain the same today. Nevertheless, the technical resources used have evolved, and not just in terms of brute processing power: new technologies have emerged, creating new possibilities in the field of simulation. Particular examples of this include networks and image generators.

The computers of the 1950s had nothing like the multimedia capacities of modern machines. At the time, operators communicated with the machine using teleprinters. Video screens and keyboards eventually replaced teleprinters, but until the end of the 1970s, the bulk of man-machine interfaces remained purely textual, and results were output in the form of lists of figures.

Some, however, perceived the visual potential of computers very early on. In 1963, William Fetter, a Boeing employee, created a three-dimensional (3D) wireframe model of an airplane to study take-offs and landings. He also created First Man, a virtual pilot for use in ergonomic studies. Fetter invented the term “computer graphics” to denote his work. In 1967, General Electric produced an airplane simulator in color. The graphics and animation were relatively crude, and it required considerable resources to function. Full industrial use of computer-generated images did not really begin until the early 1970s, with David Evans and Ivan Sutherland’s Novoview simulator, which made use of one of the first image generators. The graphics were, admittedly, basic, in monochrome wireframe, but a visual aspect had finally been added to a simulator. Their company, Evans & Sutherland, was to become a global leader in the field of image generation and visual simulation. In 1974, one of their collaborators, Ed Catmull, invented Z-buffering, a technique which improves the management of 3D visualization, notably the calculation of hidden parts. Two years later, this technique was combined with another new technique, texture mapping, developed by Jim Blinn. This enabled considerable progress to be made in the realism of computer-generated images by pinning a bitmap image (i.e. a photo) onto the surface of a 3D object, giving the impression that the object is much more detailed than it really is. For example, by pinning a photo of a real building onto a simple parallelepiped, all the windows and architectural features of the building can be seen, while all that needs to be managed is a simple 3D volume. Although processing power remained insufficient and software was very limited, the bases of computer-generated images were already in place. At the beginning of the 1980s, NASA developed the first experimental virtual reality systems; interactive visual simulation began to become more widespread. Virtual reality, although not the commercial success anticipated, had a significant impact on research on the man-machine interface.
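
The z-buffer idea itself is simple: for each pixel, keep the depth of the nearest surface drawn so far and overwrite the pixel only when an incoming fragment is closer to the viewer. The toy sketch below is purely illustrative and unrelated to any particular image generator.

```python
def render_fragments(width: int, height: int, fragments):
    """Toy z-buffer: fragments are (x, y, depth, color) tuples; a pixel is
    only overwritten when the new fragment is nearer than what is stored."""
    color = [[None] * width for _ in range(height)]
    depth = [[float("inf")] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:          # nearer than anything drawn so far
            depth[y][x] = z
            color[y][x] = c
    return color

# Two fragments land on the same pixel: the nearer one (z = 1.0) wins.
image = render_fragments(2, 2, [(0, 0, 5.0, "far"), (0, 0, 1.0, "near")])
print(image[0][0])  # -> "near"
```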

During the 1980s and 1990s, another technology emerged, so revolutionary that some spoke of a “new industrial revolution”: networks. Arpanet, the ancestor of the Internet, had been launched at the end of the 1960s, but it was confined to a limited number of sites. Local area networks and then the Internet made networks far more widespread, linking humans and simulation systems.

In 1987, a distributed simulation network (SIMNET) was created to respond to a need for collective training (tanks and helicopters) in the US Army. This experimental system rapidly aroused interest and led, in 1989, to the first drafts of the DIS (Distributed Interactive Simulation) standard protocol for simulation interoperability, which became the IEEE 1278 standard in 1995 (see Chapter 7). Real-time and low-level in orientation, DIS was far from universal, but it was extremely successful and is still used today. DIS allowed cooperation between simulations. The community working with aggregated constructive simulations, such as war games, created its own non-real-time standard, ALSP (Aggregate Level Simulation Protocol), developed by the Mitre Corporation. Later, the American Department of Defense revised its strategy and imposed a new standard, HLA (High Level Architecture), in 1995, which became the IEEE 1516 standard in 2000. Distributed simulation rapidly found applications. In 1991, simulation (including distributed simulation through SIMNET) was used for the first time on a massive scale in preparing a military operation: the first Gulf War. France was not far behind: in 1994, two Mirage 2000 flight simulators, one in Orange and the other in Cambrai, were able to work together via DIS over a simple Numeris link (ISDN: réseau numérique à intégration de service, RNIS). The following year, several constructive Janus simulations were connected via the Internet between France and the United States. 1996 saw the first large multi-national simulation federation, with the Pathfinder experiment, repeated several times since. This capacity to make simulations, real equipment, and information systems interoperate via networks opens the way for new capabilities, used, for example, in battle labs such as the technico-operational laboratory of the French Ministry of Defense.

An entire book could be written on the history of simulation, a fascinating meeting of multiple technologies and innovations, but such is not our plan for this work. The essential points to remember are that simulation is the result of this meeting, that the subject is still in its youth, and that it will certainly evolve a great deal over the coming years and decades. Effectively, we are only just beginning to pass from the era of “artisan” simulations to that of industrial simulation. Simulation, as described in this book, is, therefore, far from having achieved its full potential.

1.3. Real-world examples of simulation

1.3.1. Airbus

Airbus was created in 1970 as a consortium. Something of a wild bet at the beginning, building on the success (technical, at least) of Concorde, it became a commercial success story, even — something no-one would have dared imagine at the time — overtaking Boeing in 2003. This success is linked to strong political willpower combined with solid technical competence, but these elements do not explain everything. Despite the inherent handicaps of a complex multi-national structure, the consortium’s first airplane, the Airbus A300, entered service in 1974; its performance and low operating costs attracted large numbers of clients. The A300 incorporated numerous technological innovations, including the following:

– just-in-time manufacturing;

– revolutionary wing design, with a supercritical profile and particularly accurate flight controls from an aerodynamic point of view, one of the factors responsible for significant reductions in fuel consumption (the A300 consumed 30% less than a Lockheed Tristar, produced during the same period);

– autopilot available from the initial climb to the final descent;

– wind shear protection;

– electronically controlled braking.

These innovations, to which others were added in later models (including the much-publicized two-person cockpit, in which the flight engineer’s tasks were automated), led to the commercial success of the aircraft, but also made the system more complex.

Figure 1.5. Continuous atmospheric wind tunnel from Mach 0.05 to Mach 1, the largest wind tunnel of its type in the world — A380 tests — (ONERA)


The engineers of the young Airbus consortium thus made extensive use of simulation, both digital simulation, which was becoming more widespread despite the still very high cost of computers, and physical simulation using scale models in wind tunnels (see Figure 1.5). This use of simulation, to a degree never before seen in commercial aviation, enabled Airbus to develop and build an innovative, competitive, and reliable product, and thus to achieve a decisive advantage over the competition. Without simulation, Airbus would probably not have experienced this level of commercial success.

It would be difficult to provide an exhaustive list of all uses of simulation by Airbus and its sub-contractors. Nevertheless, the following list provides some examples:

– global performance simulation: aerodynamics, in-flight behavior of a design;

– technical simulation of functions: hydraulics, electricity, integrated modular avionics, flight controls;

– simulation of mechanical behavior of components: landing gear, fuselage, wings, engine pylon structure;

– simulation of total cost of ownership;

– production chain simulation;

– interior layout simulation (for engineering, but also for marketing purposes);

– maintenance training simulators;

– pilot training simulators.

The use of simulation by Airbus is unlikely to be reduced in the near future, especially in the current context of ever-increasing design constraints. The Advisory Council for Aeronautics Research in Europe (ACARE) defines major strategic objectives for 2020 in [EUR 01], with the aim of providing “cheaper, cleaner, and safer air transport”:

– more pleasant and better value journeys for passengers;

– reduction in atmospheric emissions;

– noise reduction;

– improvements in flight safety and security;

– increased capacity and efficiency of the air transport system.

These objectives clearly go beyond the engineering of a “simple” aircraft; they require the combined and integrated engineering of a system of systems (that of air transport [LUZ 10]). To achieve this, large-scale investment will be needed, over 100 billion euros over 20 years (according to [EUR 01]), in which modeling and simulation, as well as systems engineering as a whole, play an important role.

1.3.2. French defense procurement directorate

The DGA (Direction générale de l’armement, the French defense procurement directorate) provides an interesting example of simulation use. This organization, part of the French Ministry of Defense, exists principally to acquire systems for the armed forces. The DGA’s engineers therefore deal with technical or technico-operational issues throughout the lifecycle of a system. We find examples of simulation use at every step of system acquisition (which corresponds to the lifecycle of the system from the point of view of the contracting authority). Furthermore, the DGA uses a very wide variety of simulations (constructive, virtual, hybrid, and so on).

Figure 1.6. Use of simulation in armament programs


Figure 1.6 illustrates the different applications of simulation in the system acquisition cycle:

Determine operational concepts: simulation allows “customers” and end users (the armed forces) to better express their high-level needs, for example, in terms of capability. It allows concepts to be illustrated and tested virtually (see Chapter 8).

Provide levels of performance: it would be difficult to issue a contract for a system from a concept; a first level of dimensioning is required to transform an operational need, for example, “resist the impact of a modern military charge”, into “resist an explosion on the surface of the system with a charge of 50 kg of explosives of type SEMTEX”. Simulation can give an idea of the levels of performance necessary to achieve the desired effect.

Measure operational efficiency: the next step is to re-insert these performance levels into the framework of an operational mission by “playing out” operational scenarios in which the new system can be evaluated virtually.

Mitigate risk and specify: the system specifications must be refined, for example, by deciding what material to use for armoring the system to obtain the required resistance. The potential risks involved with these choices can then be evaluated: What will the mass of the system be? Will it be possible to motorize it? Will the system remain within the limits of maneuverability? Note that this activity usually falls under the authority of the system project manager.

Mitigate risk regarding human factors: evaluate the impact of the operators on the performance of the system: Is there a risk of overtaxing the operator in terms of tasks or information? Might the operator reduce the performance of the system? For example, it would be useless to develop an airplane capable of accelerating at 20 g if the pilot can only withstand half of that.

Evaluate the feasibility of technological solutions: this is an essential step in risk mitigation. It ensures that a technology can be integrated and performs well in the system framework. At this stage, we might notice, for example, that a network architecture based on a civil technology, 3G mobile telephony, poses problems due to the existence of zones of non-coverage, gives very variable bandwidth with the possibility of disconnections, and so on, or, on the contrary, check that the technology in question responds well to the expressed need in the envisaged framework of use.

Optimize an architecture or function: once the general principle of an architecture has been validated, we can study means of optimizing it. In the case of “intelligent” munitions, that is, munitions capable of autonomous or assisted direction toward a target, we might ask at what stage in flight target detection should occur. If the sensor is used too early, there may be problems identifying or locking onto the correct target; if it is too late, the necessary margin for correcting the trajectory to hit the target will be absent.

Facilitate the sharing and understanding of calculation results: the results of studies and simulations can be very difficult to interpret, for example, lists of several million numerical values recording dozens of different variables. Illustrating these results using simulation, for example, with a more visually representative virtual prototype of the system, allows the analyst to better understand the dynamic evolution of the system.

Study system production and maintenance conditions: implemented by the project manager, manufacturing process simulations are used to specify, optimize, and validate manufacturing conditions (organization of the production line, work station design, incident management, and so on). Simulation can be applied to the whole production line, the entire logistics system (including supply flows for components and materials and delivery of the finished product), or a single work station (ergonomic and safety improvements). This activity also includes studies of maintenance operations, in which virtual reality can be used to check the accessibility of certain components during maintenance operations.

Prepare tests: tests of complex systems are themselves complicated, or even complex, to design and costly to implement. Simulation provides precious help in optimizing tests, improving, for example, the determination of the qualification domain to cover, the finalization and validation of the corresponding test scenarios, the test architecture, and the evaluation of potential risks. Simulation can also be used during the interpretation of test results, providing a dynamic 3D representation of the behavior of the system based on the data captured during the test, which are voluminous and difficult to analyze.

Supplement tests: the execution of tests allows us to acquire knowledge of a system for a given scenario, for example, a flight envelope. Simulation allows the system to be tested (virtually) over and beyond what can be tested in reality, whether due to budget or environmental constraints (e.g. pollution from a nuclear explosion), safety (risk to human life), or even confidentiality (emissions from an electronic warfare system or from a radar can be captured from a distance and analyzed; moreover, they constitute a source of electromagnetic pollution). By carrying out multiple tests in simulation, the domain in which the system has been tested can be extended, or the same scenario can be repeated several times to evaluate reproducibility and make test results more statistically representative, thus increasing confidence in the results of qualification.

Specify user-system interface: the use of piloted study simulators representing the system allows the ergonomics of the user-system interface for the initial system to be analyzed and improvements to be suggested to increase efficiency or adapt it to a system evolution (addition of functionalities).

Beyond its role as direct support during the various phases of the program acquisition cycle, simulation is also used by the DGA for the development of technical expertise, particularly in the context of project ownership, where technical mastery of the operation of a system is indispensable when working on the specification and qualification of that system. This mastery, however, is difficult to obtain without being the project manager or the final user. Simulation is also a tool for dialog between experts in collaborative engineering processes bringing together multi-disciplinary teams, the idea being, eventually, to maintain and share virtual system prototypes between the various participants in the acquisition process (the principle of simulation-based acquisition, SBA). Finally, modeling and simulation are seen as the main tools for mastering system complexity; at the DGA, simulation is attached to the “systems of systems” technical department.

1.4. Basic principles

In any scientific discipline, a precise taxonomy of the field of work is essential, and precision is critical in the choice of vocabulary. Unfortunately, simulation is spread across several scientific “cultures” and is generally considered as a tool rather than as a domain in its own right, with the result that the same term does not always mean the same thing depending on who is using it. Furthermore, a large number of typologies of simulation exist. Although simulation is becoming increasingly organized, a certain amount of effort is still required to guarantee coherence between different areas and between different nationalities. In France, the absence of a national standard concerning simulation does not help, as the translation of English terms varies from one person to another.

There are, however, a number of documents that can be considered references in the field, and upon which the terminology used in this work will be based. Most are in English, as the United States is both the largest market for simulation and the principal source of innovations in the field. This driving role gives the United States a certain predominance in the orientation of the domain.

Before going any further, we shall explain what exactly is meant by system simulation.

1.4.1. Definitions

1.4.1.1. System

A system is a group of resources and elements, whether hardware or software, natural or artificial, arranged so as to achieve a given objective. Examples of systems include communication networks, cars, weapons, production lines, databases, and so on.

A system is characterized by

– its component parts;

– the relationships and interactions between these components;

– its environment;

– the constraints to which it is subjected;

– its evolution over time.

As an example, we shall consider the system of a billiard ball falling in the Earth’s gravitational field. This system is composed of a ball, and its environment is Earth’s atmosphere and gravity. The ball is subjected to a force that pulls it downwards and an opposing force in the form of air resistance.

1.4.1.2. Model

First and foremost, simulating, as we shall repeat throughout this work, consists of building a model, that is, an abstract construction that represents reality. This process is known as modeling. Note that, in English, the phrase “modeling and simulation” is encountered more often than the word “simulation” on its own.

A model can be defined as “any physical, mathematical, or other logical representation (abstraction) of a system, an entity, a process or a physical phenomenon [constructed for a given objective]” [DOD 95].

Note that this definition includes numerous types of models, not only computer ones. A model can take many forms, for example, a system of equations describing the trajectory of a planet, a group of rules governing the treatment of an information flow, or a plastic model of an airplane. An interesting element of the definition is the fact that a model is “constructed for a given objective”: there is no single, unique model for a system. A system model that is entirely appropriate for one purpose may be completely useless for another.

To illustrate this notion of relevance of the model, to which we will return later, we shall use the example of the billiard ball falling in the Earth’s gravitational field. Suppose that we want to know how long the fall will last with moderate precision, to the nearest second, that the altitude of release z0 is low (not more than a few hundred meters), and that the ball is subject to no other influence than gravity, g, which we can treat as a constant because of the low variation in altitude and the low level of precision required.

The kinematic equations of the center of gravity of the ball give a(t) = g for the acceleration (taking the upward direction as positive, so g ≈ −9.81 m/s²) and v(t) = v0 + gt for the velocity, from which we deduce that z(t) = z0 + v0t + ½gt².

The equations giving a(t), v(t), and z(t) constitute a mathematical abstraction of the “billiard ball (falling in the Earth’s gravitational field)” system; this is therefore a model of the system. But are there other possible models? Without going too far, let us consider our aims and our hypotheses. Imagine now that we want to obtain the fall time correct to within 50 ms. To arrive at this level of precision, we must consider other factors, notably air resistance, in the form of a term representing fluid friction as a force opposing the fall, f(v) = −av. We thus have a new expression for the altitude as a function of time z(t): writing k = a/m, where m is the mass of the ball, z(t) = z0 + (g/k)t + (1/k)(v0 − g/k)(1 − e^(−kt)), the corresponding velocity v(t) = g/k + (v0 − g/k)e^(−kt) tending toward the limit g/k.

This expression is clearly more complicated and more demanding in terms of calculation power than the former. Moreover, our hypothesis was relatively simple, the forces of friction being closer to f(v) = −av², especially at high velocities. Nevertheless, our second expression reproduces the movement of the ball more realistically over long periods of falling, as the velocity is no longer linear in time, but tends toward a limit.

If we represent the velocity of the ball as a function of time, following both hypotheses (simple and with fluid friction), we notice that both curves are close at the beginning and then diverge. For short durations starting when the ball is released and assuming that the model with friction is close enough to reality for our purposes, we notice that the simple model is also valid for our needs in this zone, known as the domain of validity of the model.
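
To make this divergence concrete, a minimal sketch in C can compare the two velocity laws; the friction coefficient k = a/m used here is purely illustrative, chosen arbitrarily for the purposes of the example:

    #include <math.h>
    #include <stdio.h>

    #define G (-9.81)  /* gravity, upward direction positive (m/s^2) */
    #define K 0.05     /* illustrative friction coefficient k = a/m (1/s) */

    int main(void)
    {
        double t;
        /* Release from rest (v0 = 0): simple model versus model with fluid friction */
        for (t = 0.0; t <= 30.0; t += 5.0) {
            double v_simple   = G * t;                          /* v(t) = g t              */
            double v_friction = (G / K) * (1.0 - exp(-K * t));  /* v(t) = (g/k)(1 - e^-kt) */
            printf("t = %5.1f s  v_simple = %8.2f m/s  v_friction = %8.2f m/s\n",
                   t, v_simple, v_friction);
        }
        return 0;
    }

For the first few seconds the two values remain close, then they separate as the friction model approaches its limit velocity g/k (about −196 m/s with this illustrative value of k); the zone in which both remain acceptable for our purposes is precisely the domain of validity of the simple model.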

If our needs are such that even the model with friction is not representative enough of reality, other influences can be considered, such as wind. If our ball is released from a high altitude, we must consider the variation of gravity with altitude, introducing a new term g(z) into the model equations and making matters considerably more complex. We could go even further, considering the Coriolis effect, the non-homogeneity of the ball, and so on. We therefore see that models of the same system can vary greatly depending on our aims. Of the two models we have constructed, we cannot choose the better one without knowing what it is to be used for. Some might say that, in case of doubt, the most precise model should be used. In our simple example, this is certainly feasible, but the case would be different for a more complicated system, governed by hundreds of equations, as is often the case with aerodynamic modeling for aeronautics: solving these equations is costly in terms of time and technical resources, and the use of a model too “heavy” for the objective in hand consumes resources over and beyond what is strictly necessary, leading to cost overruns, a sign of poor quality.

1.4.1.3. Simulation

The definition provided by the US Department of Defense, seen above [DOD 95], is interesting as it introduces the notion of subjectivity of a model. However, it concerns models in general; in simulation, we use more specific models.

The IEEE, a civil standards organization, gives two further definitions in standard IEEE 610.3-1989:

– an approximation, representation, or idealization of certain aspects of the structure, behavior, operation, or other characteristics of a real-world process, concept, or system;

– a mathematical or logical representation of a system or the behavior of a system over time.

If we add “for a given objective” to the first definition, it corresponds closely to that given by the US DoD. The second definition is interesting, as it introduces an essential element: time. The model under consideration is not fixed: it evolves over time, as in the case of our falling billiard ball model. It is this temporal aspect that gives life to a model in a simulation, as simulation is the implementation of a model in time.

To simulate, therefore, is to make a model “live” in time. The action of simulation can also refer to the complete process which, from a need for modeling, allows the desired results to be obtained. In general, this process can be split into three major steps, which we will cover in more detail later:

– design and development of the model;

– execution of the model;

– analysis of the execution.

Strictly speaking, the definition given above refers to the second step, which is misleading, as the most important part is the design of the model, on which the results and the interpretation of results depend. The last step is not always easy: when the results consist of several thousands of pages of figures which the computer “spits out”, it can take weeks of work to reach the core substance. Use of the full term “modeling and simulation” is therefore helpful, as it emphasizes the importance of the preliminary modeling work.

Note that simulation also designates the computer software used to build and execute a model.

Figure 1.7. General principle of simulation


Figure 1.7, based on [ZEI 76], summarizes the general principle of simulation: we take as a point of departure a real system, with its environment and a usage scenario, and we construct an abstraction, the model, which is then made concrete by a simulation application. The quality of the model and of the simulation with respect to the aims is assured by a process of validation and verification, which will be discussed in detail later.

To return to the example of the falling billiard ball, the simulation program using the simple model could be as follows, in the language C:

CODE:

    #include <stdio.h>

    #define G (-9.81)  /* acceleration due to gravity, upward direction positive (m/s^2) */

    int main(void)
    {
        double z0, v0;  /* initial altitude (m) and initial velocity (m/s) */
        double dt;      /* integration step (s) */
        double t, z, v, v_next;

        printf("Simulation of falling billiard ball:\n");
        printf("Initial altitude and velocity? ");
        scanf("%lf %lf", &z0, &v0);
        printf("Integration step? ");
        scanf("%lf", &dt);
        printf("Initial altitude = %f m, Initial velocity = %f m/s\n", z0, v0);

        t = 0.0;
        z = z0;
        v = v0;
        while (z >= 0.0) {
            printf("t = %.3f s --> z = %.2f m, v = %.2f m/s\n", t, z, v);
            /* advance one step: constant acceleration g (simple model, no friction) */
            v_next = v + G * dt;
            z += 0.5 * (v + v_next) * dt;
            v = v_next;
            t += dt;
        }
        printf("t = %.3f s --> z = %.2f m, v = %.2f m/s\n", t, z, v);
        printf("Collision with the ground!\n");
        return 0;
    }

The execution of the simulation gives the following results for a billiard ball dropped from the top of the Eiffel Tower:

EXECUTION:

    Simulation of falling billiard ball:

    Initial altitude and velocity? 320.0 0.0

    Integration step? 1.0

    Initial altitude = 320.000000 m, Initial velocity = 0.000000 m/s

    t = 0.000 s --> z = 320.00 m, v = 0.00 m/s

    t = 1.000 s --> z = 315.10 m, v = -9.81 m/s

    t = 2.000 s --> z = 300.38 m, v = -19.62 m/s

    t = 3.000 s --> z = 275.86 m, v = -29.43 m/s

    t = 4.000 s --> z = 241.52 m, v = -39.24 m/s

    t = 5.000 s --> z = 197.38 m, v = -49.05 m/s

    t = 6.000 s --> z = 143.42 m, v = -58.86 m/s

    t = 7.000 s --> z = 79.65 m, v = -68.67 m/s

    t = 8.000 s --> z = 6.08 m, v = -78.48 m/s

    t = 9.000 s --> z = -77.31 m, v = -88.29 m/s

    Collision with the ground!

This represents the raw results of the execution of the simulation. Here, we only have a small quantity of data, but imagine a simulation of the flight of a rocket with hundreds of variables and millions of values …; the interpretation and analysis phase should not be underestimated, as it is sometimes the longest of the three phases. Using our example, we can study the “profile” of the fall of the ball and evaluate the exact moment of impact and the speed at that moment. The analytical calculations are simple, but once again, our example is a trivial textbook scenario; the case would be different for a more complex system, including systems of equations with tens, even hundreds, of variables.
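
For instance, applying the simple model analytically, setting z(t) = 0 with z0 = 320 m and v0 = 0 gives an impact time t = √(2z0/|g|) = √(2 × 320/9.81) ≈ 8.08 s and an impact velocity v ≈ −9.81 × 8.08 ≈ −79.2 m/s, consistent with the raw trace above, which brackets the impact between t = 8 s and t = 9 s.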

In summary, we have seen, using the example of the falling ball, a system model, a simulation application, and the execution of a simulation.

1.4.2. Typology

Simulations can be classified in various ways. We shall discuss three possibilities, which represent the main divisions in the field of simulation:

– levels of granularity;

– architecture;

– uses.

1.4.2.1. Levels of granularity

The level of granularity of a simulation refers to the physical size of the entities it manipulates; it is an indicator of the scale of the system that can be simulated with the simulation in question. This notion should not be confused with the refinement of a model, that is, its temporal precision (calculation step) and spatial precision (e.g. resolution of the terrain): even if granularity and refinement are clearly often linked, this is not always the case. We could, for example, design a simulation of a system of systems in which we model very low-level physical phenomena. Considerable work would be required to construct and validate such a simulation, along with considerable processing power, but it is possible and, without going as far as this extreme example, can be relevant, particularly when reusing a model which is more refined than necessary, but valid and available.

In order of increasing granularity, we typically distinguish between the following levels:

– Physical phenomena: example, propagation of a radar wave, airflow around a turbine blade, and the constraints on a ship’s hull exposed to swell. Most often, simulations of physical phenomena use modeling based on differential equations. This was the case for our example of the falling billiard ball.

– Components: a component carries out a function within the system, but without real autonomy, for example, the homing head on a missile, a gearbox, or a fire detector.

– Sub-systems: these are groups of several components, which carry out a function within a system with a certain degree of autonomy, for example, guiding a missile or fire protection on a ship.

– System: as defined above. A simulation at system level may contain several systems, examples: aircraft, missile, combat between aircraft and missile.

– Multi-systems: groups of significant numbers of systems carrying out a common mission, examples: a factory, an airport, and a car racetrack.

– System of systems: see definitions given earlier. Examples: logistics system of the A380 (production, delivery, implementation, and so on), telephone network, international banking system.

In defense simulation, the following levels are considered, which correspond to the last two categories above:

– Tactical corresponds to combat at multi-system level, for example, an air squadron or a company of infantry.

– Operational corresponds to a battle, at regiment level, joint tactical force (groupement tactique interarmes, GTIA) or above, for example, a reconstruction of the Battle of Austerlitz (1805). In the civilian domain, the logistics system of the A380 (production, delivery, implementation, and so on) would be an example at this level.

– Strategic (military) refers to a theater of war or a campaign, for example, a reconstruction of Napoleon’s Russian campaign of 1812. The American synthetic theater of war (STOW) deployed 8,000 entities during an exercise in October 1997. An example at this level in the civilian sector would be an economic simulation at planetary level.

It is generally fairly easy to define the level of granularity of a simulation very early in the development process as the granularity is linked to what is being simulated. This allows a first approach to be made concerning the level of refinement, reusable models, and even the architecture. Typically, simulations of physical phenomena are based on non-interactive calculation languages (FORTRAN, Matlab, Scilab, and so on), while at tactical and operational level, visual and interactive simulation products are used instead (Janus, JTLS, OneSAF, and so on).

1.4.2.2. Architecture

Taking a step back, we observe that simulation systems usually have the same general architecture:

– models of systems, to which environment models can be added, or models of human behavior if the system operators need to be simulated;

– parameters giving the dimensions of models (e.g. mass, maximum flight altitude of an airplane, capacity of a fuel tank, maximum acceleration, maximum load supported by a road bridge);

– a scenario describing how the simulation progresses (e.g. for a road traffic simulation, the scenario indicates, among other things, the initial state of traffic, the time at which a junction will be closed for road works, and bad weather at the end of the day);

– if the simulation is interactive, a user-machine interface allows control input to influence the course of the simulation (e.g. the control stick of a flight simulator);

– a simulation engine that allows these components to be animated over time (in the following chapters, we shall discuss this key component of simulations in greater depth);

– simulation output: a reproduction of the dynamic behavior of the system (saved state variables and/or statistics for a technical simulation, visualization through an image generator of the state of the system for a flight simulator).
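
To make these components more concrete, the following minimal sketch in C ties them together; the model, parameters, scenario, and variable names are all hypothetical, chosen only for illustration:

    #include <stdio.h>

    typedef struct { double mass; double max_speed; } Parameters;  /* dimensions the model */
    typedef struct { double position; double speed;  } State;      /* state variables      */
    typedef struct { double time; double new_speed;  } Event;      /* one scenario event   */

    /* Model: how the state evolves over one time step. */
    static void model_step(State *s, const Parameters *p, double dt)
    {
        if (s->speed > p->max_speed)
            s->speed = p->max_speed;       /* a parameter bounds the behavior */
        s->position += s->speed * dt;
    }

    int main(void)
    {
        Parameters params = { 1500.0, 30.0 };                 /* e.g. a vehicle: mass, speed limit */
        State state = { 0.0, 0.0 };
        Event scenario[] = { { 0.0, 20.0 }, { 60.0, 0.0 } };  /* set off, then stop after 60 s     */
        const int n_events = 2;
        double t, dt = 1.0;
        int next = 0;

        /* Simulation engine: advance time, apply scenario events, step the model,
           and produce output (here, a simple trace of the state variables). */
        for (t = 0.0; t <= 120.0; t += dt) {
            while (next < n_events && scenario[next].time <= t)
                state.speed = scenario[next++].new_speed;
            model_step(&state, &params, dt);
            printf("t = %6.1f s  position = %8.1f m  speed = %5.1f m/s\n",
                   t, state.position, state.speed);
        }
        return 0;
    }

A real simulation engine is obviously far more elaborate (event queues, time management, user interaction), but the same roles can be recognized: model, parameters, scenario, engine, and output.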

Figure 1.8 illustrates the simulation architecture, which we shall discuss on several occasions later in this book.

Figure 1.8. Typical simulation architecture


Although, seen from afar, all simulations more or less conform to this model, a closer look reveals considerable variation in the way it is implemented. This differentiation is such that it creates divisions within the simulation community; it is rare to find a simulation engineer with expertise in more than one of these architectures. We can distinguish the following forms of simulation: closed-loop digital simulation, hardware-in-the-loop (hybrid) simulation, constructive simulation, virtual simulation, and simulation with instruments.

Closed-loop digital (or “scientific”) simulation is a purely computerized form of simulation that operates without human intervention, so it is non-interactive. It provides modeling, often very refined, of a physical phenomenon or a system component, rarely more. It does not operate in real time and sometimes requires considerable processing power. We should highlight, at this point, the fact that the main application of the world’s most powerful supercomputers is simulation, in particular in developing nuclear weapons, meteorological calculations, aerodynamic calculations for aircraft, and simulations of vehicle collisions. The CEA’s TERA-10 computer, built for the “Simulation” program, was the most powerful in Europe when it was launched in 2006, with a calculating power of 50,000 billion operations per second and a storage capacity of 1 million billion bytes.

In this type of simulation, the phenomenon under study is typically modeled using differential equations applied to elementary portions of the system (discretization, for example using the finite element method). The resolution is obtained through calculations on matrices of tens, even hundreds, of rows and columns (as many as there are variables), hence the need for considerable processing power, especially as the calculations are iterated at every time step.
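
As a deliberately minimal illustration of this principle (a toy three-cell system with an arbitrary interaction matrix, very far from a real finite element code), an explicit time-stepping loop might look like the following:

    #include <stdio.h>

    #define N 3        /* number of elementary portions (real codes handle thousands) */
    #define STEPS 5

    int main(void)
    {
        /* Illustrative interaction matrix A and state vector x (e.g. temperatures).
           At each time step the state is advanced as x <- x + dt * A x. */
        double A[N][N] = { { -2.0,  1.0,  0.0 },
                           {  1.0, -2.0,  1.0 },
                           {  0.0,  1.0, -2.0 } };
        double x[N] = { 100.0, 0.0, 0.0 };
        double dx[N];
        double dt = 0.1;
        int i, j, step;

        for (step = 0; step < STEPS; step++) {
            for (i = 0; i < N; i++) {        /* matrix-vector product A x */
                dx[i] = 0.0;
                for (j = 0; j < N; j++)
                    dx[i] += A[i][j] * x[j];
            }
            for (i = 0; i < N; i++)          /* explicit integration step */
                x[i] += dt * dx[i];
            printf("step %d: x = %7.2f %7.2f %7.2f\n", step + 1, x[0], x[1], x[2]);
        }
        return 0;
    }

On a real problem, the matrix has as many rows as there are unknowns and the loop runs for a very large number of steps, which explains the processing power required.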

Using the TERA-10, the simulation of a nuclear fusion reaction lasting less than one thousandth of a second can require hours of calculations. On a simpler level, our simulation of the billiard ball falling is a closed-loop digital simulation. Figure 1.9 shows an intermediate example of this kind of simulation.

Hardware-in-the-loop (hybrid) simulation adds real equipment into the loop, combining a model (i.e. a virtual version) of part of the system with a real part of the system, which is therefore only partially simulated. This type of simulation is particularly important when testing and characterizing components. The simulation aims to stimulate the real component or components being tested by generating the environment of the component (inputs, reactions to actions, and so on).

Figure 1.9. Digital simulation: evaluation of damage to a concrete slab caused by a detonation on contact (OURANOS simulation code, by M. Cauret, DGA/CEG)


This means that the system generally needs to work in strict real time. Figure 1.10 gives an example of a hardware-in-the-loop simulation system, the real-time simulator for pulsed Doppler homing heads (simulateur temps réel pour autodirecteurs doppler à impulsions, STRADI) used by the Ballistics and Aerodynamics Research Laboratory of the DGA. To test an air-to-air missile homing head of the Mica IR type, the component is mounted on a three-axis table, and infrared scenes containing targets are then generated so that the homing head “believes” it is in use as part of a missile in flight. By stimulating the homing head in this way, its reactions to different scenarios can be evaluated, determining its performance and its conformity to the specifications.

Constructive simulations are purely computer based. They include high-level models, that is, systems and systems of systems, together with their operators. The human factor is therefore involved, but in simulated form. These simulations can be interactive, but the operator in the loop only takes general decisions (missions), and the model remains the main source of results. To summarize, a constructive simulation uses simulated human beings operating simulated systems in a simulated environment. War games and technico-operational simulations fall into this category.

Figure 1.10. STRADI hybrid simulation for missile testing (DGA/LRBA)


Figure 1.11 is a screen capture of the Ductor naval aviation simulator, used by the French Navy to “play” naval operations.

Virtual simulations are interactive, real-time simulations at system (platform) level with human operators, using user-machine interfaces which are more or less faithful representations of the real equipment. The user is not only in the loop but constitutes the main source of events and stimuli for the simulation. The bulk of this category is made up of piloted simulations (flight simulators and trainers), but training simulations for information system operators and military commanders also fall into this group. A virtual simulation may use a simple PC to reproduce the interface of a simulated platform, as in Microsoft’s Flight Simulator piloting program. On the other hand, it may use a very realistic and sophisticated model of the platform, or even components of the real system, as in the Rafale simulation centers, where each simulator contains an on-board computer as used in the real aircraft.

Figure 1.11. Ductor constructive simulation (EMM/ANPROS)


Figure 1.12 shows a “full flight” helicopter simulator, including a reproduction of the cabin of the aircraft (Puma or Cougar), mounted on a three-axis table which allows the crew to feel very realistic movements, and a projection sphere giving immersive vision of the 3D environment.

Simulation with instruments uses real equipment operated by human users in the field. Simulation is used to supply a tactical environment and to model the effects of weapons. Simulation with instruments is often referred to as “live simulation”, a term which designates its field of application. This kind of simulation is used for on-the-ground training of operators, for example soldiers, in very realistic conditions; they use their own equipment (weapons), specially fitted with laser transmitters, data links, recorders, and so on. In the civilian world, the game of paintball could be considered as belonging to the domain of live simulation. In the military sphere, a typical example would be the French Army’s combat firing simulator for light weapons (simulateur de tir de combat pour armes légères, STC/AL), used to train operators of the FAMAS, MAS 49/56, AA52, ANF1, and MINIMI weapons (Figure 1.13). Infantry soldiers wear a vest fitted with infrared and laser sensors, which detect shots hitting the wearer and produce a signal. The STC/AL is not limited to firing devices: it also includes a control and arbitration system for starting a combat simulation exercise, replaying the exercise for after-action analysis and lessons learned, penalizing “shot” participants, and so on. An equivalent for tank training exists in the form of the CENTAURE system, set up by the French Army at their Mailly camp.

Figure 1.12. SHERPA flight simulator (SOGITEC)


The American Department of Defense uses a similar classification, with three main categories of defense simulation:

– constructive simulations: simulated systems implemented by simulated operators (with or without the intervention of a real operator in the course of the simulation), for example, war games;

– virtual simulations: simulated systems implemented by real operators. This group includes piloted simulations but also simulations of information and command systems, for example, flight simulators or a simulation of the control center of a nuclear power station;

– live simulation: real systems are implemented by real operators, for example, in an infantry combat shooting simulator.

Digital simulations and hardware-in-the-loop simulations are not included in this classification, which focuses on operational and technico-operational simulations used by the US Department of Defense.

Figure 1.13. STC/AL system on a FAMAS (EADS/GDI Simulation)


1.4.2.3. Uses

Simulation has innumerable uses. Within the Ministry of Defense and civilian industry, we can list a number of examples: initial training, continuous training, mission rehearsal, assistance to decision making, justification of choices, concept studies, planning, operation management, forecasting, systems acquisition, marketing, and leisure.

Initial training aims to equip a student with basic skills in using a procedure or a system. Gaps in the student’s knowledge and his or her lack of experience often make immediate use of the real system dangerous; simulation provides a useful halfway step, often using a simplified, targeted version of the system to reduce costs and complexity. In this way, the student masters the necessary cognitive processes (e.g. use of a weapons system) and/or psychomotor reflexes (e.g. reaction to a breakdown).

Continuous training is used to maintain and develop the capacities of system users by increasing the overall time spent using the system or by exposing them to specific situations (e.g. landing without visibility and system failures). Training is the use of simulation best recognized by the general public, via flight simulators; as spectacular as these are, however, they only represent part of a larger group of training systems. Training simulation systems constitute a whole family which covers a far wider area than learning to pilot a system, including learning procedures, deepening knowledge of the system, maintenance training, and crisis management training. As well as piloted trainers and simulators (not just aircraft simulators but also nuclear power station control center simulators, air traffic control simulators, mechanical excavator simulators, laparoscopic surgical equipment simulators, and so on), we also find constructive simulations (war games, economic strategy games, and so on). Note that simulation may be the only tool used in training (this is generally the case with piloted simulation) or one of many (as with computer-assisted exercises (CAX), where a simulation provides, more or less automatically, the environment needed to stimulate those in training, thus reducing the human resources needed for the exercise while maintaining, or even enhancing, realism).

Mission rehearsal falls somewhere between training and assistance in carrying out operations; for the military, this consists of playing out a mission in simulation before real-world execution. For example, a pilot may practice bombing a target in a theater of operations, familiarizing himself or herself with the terrain and the dangers of the zone in question, so that when carrying out the real mission, he or she already possesses the reflexes necessary to achieve the objectives and for self-defense. Mission rehearsal by simulation is also used by civilian bodies, for example, in preparing for a sensitive operation (such as repairing a pipe in a nuclear power plant or an innovative surgical intervention).

In a world where organizations are becoming more and more complex, simulation has become a key tool for decision-makers, be they engineers, military personnel, politicians, or financiers. Simulation can provide assistance in making decisions: by simulating the possible consequences of a decision, it becomes possible to choose the best strategy when faced with a given problem. In broad outline, the method usually consists of simulating the evolution of a given system several times, varying certain parameters (those tied to the decision-maker’s choices) each time, or of validating a choice by simulating the impact of the decision on the situation. This “decision aid” aspect can be found to a greater or lesser degree in other domains, such as planning and systems engineering.

Choice justification is the next step for the decision maker, or the person recommending the decision: once a decision has been made, simulation can be used to provide certain elements that establish the degree of pertinence of the decision. In this way, technico-operational simulations can be used to measure the conformity of an architecture to the initial specification.

A concept study consists of establishing the way in which a system should be used: for example, how many main battle tanks should there be in an armed platoon? How should vehicles move along a route where improvised explosive devices (IEDs) may be present, a current problem for troops in Afghanistan? How should tanks react in engagements with an enemy equipped with anti-tank weapons? In the civilian domain, similar questions can be applied to any system; for example, how should an airline use an Airbus A380? How should the time be divided between use and maintenance periods?

Planning is an activity which involves scheduling actions and operations to be carried out in a domain, with given aims, means, and duration. It allows the definition of objectives and intermediary steps required for the achievement of the overall aims of an operation or the implementation of a system or system of systems. For the Army, this may involve the development of plans for troop engagements; for civilians, this may supply plans for a factory or company. This planning may be carried out without particular time constraints. In a military context, this could involve, in peacetime, a study of scenarios based on hypotheses laid out in the defense department’s White Paper. Planning may also need to be carried out within a strict time period, to assist in the management of a crisis. In this case, planning and decision-making uses of simulation come together. Simulation allows several action plans to be tested and assists in choosing the optimal solution. The distinction between planning with and without time constraints is essential, as the demands made on tools are very different: with time constraints, the tools must be able to produce the necessary scenarios and environment very quickly, sometimes in a matter of hours.

Operation management demands evaluation of a situation and measurement of deviation from what was anticipated in the plans, with a possibility of rapid revision of plans for the operation. Although this activity is essentially based around information and command systems, simulation may be used to visualize the situation and its possible short-term developments.

Forecasting involves preparing for the future by evaluating different possible scenarios, most often over the long term. How will the global economy evolve? In 20 years’ time, what will be the effects of the diminution of crude oil reserves? What increase in demand will the development of China and India bring about? What might be, over 10 years, the impact of a massive introduction of vehicles running on alternative power sources (hybrid, electric, hydrogen, biofuel, solar, and so on)?

Systems acquisition is at the core of the work of the DGA, the main project owner of complex systems in France and a major user of simulations. Simulation effectively accompanies a system throughout the acquisition cycle, that is, at every step from the expression of a need by the client to the commissioning of the system, from a project ownership or project management point of view. We find here the main theme of this study, so the use of simulation will be discussed in detail in later chapters.

The appearance of 3D scene and simulation authoring systems has led to a significant development in the use of virtual reality for marketing purposes. It is not always easy to demonstrate the interest of a system or a system of systems via a brochure. It is often possible to give live demonstrations, but these are expensive to organize and cannot necessarily show the full performance or operation of the system. Using simulation, the system can be implemented virtually in an environment representative of the needs of the client, making simulation a powerful tool in securing sales, whether for complex systems (e.g. implementation of air defenses) or more standard systems (e.g. a demonstration of the anti-pollution technology in a particular car).

We shall finish by touching on a very widespread but little understood use of simulation: leisure activities. This is, in fact, by far the main use of simulation. In 2007, video games alone represented a global market of almost 40 billion euros, approximately 10 times the size of the defense simulation market. The contributions of this domain to the progress of simulation are not purely financial; although defense drove technological innovation for simulation until the beginning of the 1990s, this is no longer the case. Video games are the reason that work stations are now commonplace and therefore cheap, equipped with multi-core processors capable of millions of operations per second and with graphics sub-systems capable of rendering complex 3D scenes in high definition in real time. We also have access to richer and more user-friendly software, allowing unequalled productivity in the generation of virtual environments. There will always be a place for expensive, specific high-end systems, but their market is shrinking, and visual simulation is becoming much more widespread. Certain parts of the military have even begun to use video games directly, with varying degrees of modification; these applications have gained the name of “serious games”. The French Army has developed a cheap system, INSTINCT, used for basic infantry combat training, based on Ubisoft’s Ghost Recon game. SCIPIO (Simulation de Combat Interarmes pour la Préparation Interactive des Opérations, combined-arms combat simulation for the interactive preparation of operations), the constructive simulation used in the training system for command posts in the French Army, was developed specifically for the Ministry of Defense, but its core uses innovative technology developed by MASA SCI, initially used for video games. Its modeling of human behavior, a major technological challenge in the SCIPIO project, is particularly advanced compared with previously used technology.

We could give a number of other examples, such as therapy (simulation is used in combination with virtual reality to cure phobias, such as fear of flying), but we shall stop here, having already discussed the most common categories of simulation use.

The list of uses of simulation given above was general in nature and presented without following a particular structure. In practice, various bodies have attempted to provide a structure for uses of simulation. Figure 1.14 shows the typology of simulations used by the French Ministry of Defense, first defined in 1999 and subject to a few evolutions since. This typology defines five areas for M&S, based on purpose:

Préparation des forces (PDF): training and education of the armed forces.

Appui aux opérations (AAO): support to military operations, that is, planning, rehearsal, execution, and analysis of operations, but also maintaining the skill levels of troops deployed in theater.

Préparation de l’avenir (PDA): support to the design and evaluation of capability models, concepts and doctrines, and system specifications, taking risks and innovations into account.

Acquisition (ACQ): support to the systems acquisition process in armament programs (see section 1.3.2 and Figure 1.6).

Soutien à la réalisation des outils (SRO): support to tool design. This is a transverse area, contributing to the others through the evaluation of technologies and innovations and the definition of common tools, standards, and requirements.

Figure 1.14. Typology of defense simulations


Analysis simulation to support the design of defense capabilities (simulation pour l’aide à la conception de l’outil de défense, SACOD) is used to assist in defining future defense systems in terms of needs, means (national and allied, present and future), and threats (also present and future); it provides a means of arbitrating between different options according to capability and economic criteria. In practice, SACOD is the step before a system acquisition process. In Figure 1.14, SACQ denotes the technical and functional simulation used to support the acquisition process after SACOD. In principle, simulation for acquisition should also cover SACOD, but the two were initially distinguished for practical reasons: as a general rule, SACOD and SACQ are not run by the same teams and do not use the same types of models (SACQ uses lower-level technical and technico-operational simulations than SACOD). Nowadays, with the increased importance of capability and system-of-systems approaches, this distinction is less clear cut, and there is considerable overlap between the two.

In passing, note that the limits of each domain are not, in reality, precise; a simulation can be used for studying concepts, training, or defense analysis, with or without adapting the application.

Finally, a new domain has appeared in defense simulation in the past few years: Concept Development & Experimentation (CD&E). CD&E is an approach launched by NATO and a number of countries with the aim of implementing processes to study (for forecasting purposes) and evaluate new concepts before committing to them. In this way, investment in a concept can be delayed until it has been consolidated, analyzed, and validated by the various partners involved (the military, the project owners, and the industrial project managers) following capacity-driven logic, looking at doctrinal, organizational, training, financial, and logistical aspects. Of course, the CD&E approach strongly recommends the use of simulation. This approach has structured thinking on capabilities and systems of systems in relation to defense and was one of the key factors in the implementation of battle labs in several NATO member states and in parts of the defense industry, such as the technico-operational laboratory of the French Ministry of Defense. We shall return to this subject in more detail in Chapter 8.

1.4.2.4. Other classifications

Many different typologies of simulation exist and can be found in various works and articles. The following sections cover some further elements of taxonomy, which can be used to characterize a simulation system.

– Interactivity: a simulation is called open, or interactive, if an operator can intervene in the course of the simulation, for example, by generating events. A simulation which does not allow this is referred to as closed or non-interactive.

– Entity/events: a simulation may operate on the principle of entities or objects that exchange messages among themselves and carry out actions, or on the principle of events that modify the state of the system and its components. This choice has a major impact on the architecture: in the first case we can, for example, use agent-based modeling techniques, whereas in the second we are in the framework of a discrete event simulation engine.

– Determinism: a simulation integrating random phenomena is known as a probabilistic or stochastic simulation. This feature is often linked to another, reproducibility: a stochastic simulation is reproducible if an execution can be replayed identically despite the presence of random processes (a minimal illustration is sketched after this list). This characteristic is important for study simulations, to which we will return in greater depth later.

– Time management: the time management strategy is a key element in the architecture of a simulation, and therefore also an item of taxonomy. A simulation may or may not be real time. Real time may be “hard”, that is, timing must imperatively be respected, for example, if there is hardware in the loop, or “soft”, if the simulation is required to operate in real time but respect of the timing is not critical for the validity and proper operation of the system. Among non-real-time simulations, we find time-stepped, discrete event, and mixed-event simulations. There are also simulations that operate at N times real time, usually paced by the clock of the host computer, which is a convenient trick to avoid the time management question.

– Distribution and parallelism: a simulation may be monolithic, destined for use on a single processor in a single machine, or it may be optimized for execution on several processors, or on a processor with multiple cores, to improve performance. The architecture of a simulation may also be distributed, with the application being shared across several machines in the same place (local network or cluster) or at a distance (geographic distribution). Distribution is a strongly structuring design choice, which can be justified by performance requirements or by the possibility of reusing simulation components (see Chapter 7).
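
To illustrate the reproducibility property mentioned above, the following minimal sketch in C (with purely illustrative values) fixes the seed of the pseudo-random generator so that a stochastic run can be replayed identically:

    #include <stdio.h>
    #include <stdlib.h>

    /* Replaying a stochastic run: the pseudo-random draws are entirely
       determined by the seed, so the same seed reproduces the same run. */
    static void run(unsigned int seed)
    {
        int i;
        srand(seed);                       /* fixed seed => reproducible sequence */
        printf("run with seed %u:", seed);
        for (i = 0; i < 5; i++)
            printf(" %d", rand() % 100);   /* e.g. random delays, failures, ... */
        printf("\n");
    }

    int main(void)
    {
        run(12345);   /* first execution */
        run(12345);   /* identical replay of the same run */
        run(67890);   /* different seed: a different, but equally reproducible, run */
        return 0;
    }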

1.5. Conclusion

We have seen that simulation is a fundamental, almost essential, tool for complex system and system-of-systems engineering. Widespread in most of the corresponding activities, at all stages in the life-cycle of the system, the field of simulation is highly diversified, a fact which can pose coherence and usage problems; we shall come back to this subject later.

Simulation can itself be complex and demands methodical and prudent use to avoid costs spiraling out of control, which would reduce the profitability of the approach, and to avoid producing unusable results, which can happen when a model is invalid. As with any system, simulation requires a rigorous approach and appropriate use. If these conditions are respected, considerable gains can be made, benefiting all involved in the life-cycle of the system.

1.6. Bibliography

[ADE 00] ADELANTADO M., BEL G., SIRON P., Cours de modélisation et simulation de systèmes complexes, Société des Amis de Sup’Aero (SAE), Toulouse, 2000.

[BAN 98] BANKS J., Handbook of Simulation, Wiley-Interscience, Malden, 1998.

[BER 98] BERNSTEIN R., A road map for simulation based acquisition, report of the Joint Simulation Based Acquisition Task Force, US Under Secretary for Acquisition and Technology, Washington DC, 1998.

[BOS 02] BOSHER P., “UK MOD update” (on Synthetic Environment Co-ordination Office), presented at the Simulation Interoperability Workshop (02F-SIW-187), SISO, Orlando, 2002.

[BRA 00] BRAIN C., “Lies, damn lies and computer simulation”, Proceedings of the 14th European Simulation Multiconference on Simulation and Modelling, Ghent, 2000.

[CAN 00] CANTOT P., DROPSY P., “Modélisation et simulation: une nouvelle ère commence”, Revue scientifique et technique de la défense, DGA, Paris, October 2000.

[CAN 01] CANTOT P., Cours sur les techniques et technologies de modélisation et simulation, ENSIETA, Brest, 2002.

[CAN 08] CANTOT P., Introduction aux techniques et technologies de modélisation et simulation, ENSIETA, Brest, 2008.

[CAN 09] CANTOT P., Simulation-Based Acquisition: Systems of Systems Engineering, Classes, Master of Systems Engineering, ISAE, Toulouse, 2009.

[CAR 98] CARPENTIER J., “La recherche aéronautique et les progrès de l’aviation”, Revue scientifique et technique de la défense, no. 40, DGA, Paris, 1998.

[CAS 97] CASTI J.L., Would-Be Worlds: How Simulation Is Changing the Frontiers of Science, John Wiley & Sons, New York, 1997.

[CEA 05] COMMISSARIAT À L’ENERGIE ATOMIQUE, Le tsunami de Sumatra, press release, CEA, December 2005.

[CEA 06] COMMISSARIAT À L’ENERGIE ATOMIQUE, Le supercalculateur TERA-10, press release, January 2006.

[CHE 02] CHEDMAIL P.,CAO et simulation mécanique, Hermès, Paris, 2002.

[CCO 04] COUR DES COMPTES, Maintien en condition opérationnelle du matériel des armées, The French Government Accounting Office, http://www.ccomptes.fr, Paris, 2004.

[DAH 99] DAHMANN J., KUHL F., WEATHERLY R.,Creating Computer Simulation Systems, Prentice Hall, Old Tappan, 1999.

[DGA 94] DELEGATION GENERALE POUR L’ARMEMENT, L’armement numéro 45, special edition on simulation, DGA, Paris, 1994.

[DGA 97] DELEGATION GENERALE POUR L’ARMEMENT,Guide dictionnaire de la simulation de défense, DGA, Paris, 1997.

[DOC 93] DOCKERY J.T., WOODCOCK A.E.R.,The Military Landscape: Mathematical Models of Combat, Woodhead Publishing Ltd, Cambridge, 1993.

[DOD 94] US DEPARTMENT OF DEFENSE,Directive 5000.59-M: DoD M&S Glossary, DoD, Washington DC, 1994.

[DOD 95] US DEPARTMENT OF DEFENSE,U.S. Modeling and Simulation Master Plan, Under Secretary of Defense for Acquisition and Technology, Washington DC, 1995.

[DOD 96] US DEPARTMENT OF DEFENSE,U.S. Instruction 5000.61:U.S. Modeling and Simulation Verification, Validation and Accreditation, Under Secretary of Defense for Acquisition and Technology, Washington DC, 2003.

[DOD 97] US DEPARTMENT OF DEFENSE,U.S. DoD Modeling and Simulation Glossary, Under Secretary of Defense for Acquisition and Technology, Washington DC, 1997.

[DOR 08] DOROSH M.,The Origins of Wargaming, Calgary, Canada, www.tactical gamer.com, 2008.

[EUR 01] EUROPEAN COMMISSION, European Aeronautics: A Vision For 2020 — Report of the Group of Personalities, Office for Official Publications of the European Communities, Luxembourg, 2001.

[GIB 74] GIBBS G.I.,Handbook of Games and Simulation Exercises, Routledge, Florence, 1974.

[GIB 96] GIBBs G.I.,Jim Blinns Corner: A Trip Down The Graphic Pipeline, Morgan Kaufmann, San Francisco, 1996.

[IST 94] INSTITUTE FOR SIMULATION AND TRAINING, The DIS vision: A map to the future of distributed simulation, technical report, IEEE, Orlando, 1994.

[IST 98] INSTITUTE FOR SIMULATION AND TRAINING, Standard 610.3: Modeling and Simulation (M&S) Terminology, IEEE, Orlando, 1998.

[JAM 05] JAMSHIDI M., “System-of-Systems Engineering — a Definition”,IEEE International Conference on Systems, Man and Cybernetics, Hawaii, October 2005.

[JAN 01] JANE’S, JANE’S Simulation & Training Systems, Jane’s Information Group, IHS Global Limited, 2001.

[KAM 08] KAM L. LUZEAUX D., “Conception dirigée par les modèles et simulations”, in D. LUZEAUX (ed.)Ingénierie des systèmes de systèmes: méthodes et outils, Hermès, Paris, 2008.

[KON 01] KONWIN C., “Simulation based acquisition: the way DoD will do business”, presentation at the National Summit on Defense Policy, Acquisition, Research, Test & Evaluation Conference, Long Beach, March 2001.

[KRO 07] KROB D., PRINTZ J., “Tutorial: concepts, terminologie, ingénierie”, presentation, Ecole systèmes de systèmes, SEE, Paris, 2007.

[LEM 90] LE MOIGNE J.-L.,Modélisation des systèmes complexes, Dunod, Paris, 1990.

[LIA 97] LIARDET J.-P., Les wargames commerciaux américains, des années soixante à nos jours, entre histoire militaire et simulation, une contribution à l’étude de la décision, Doctoral Thesis, University of Paul Valéry, Montpellier, 1997.

[LUZ 03a] LUZEAUX D., “Cost-efficiency of simulation-based acquisition”,Proceedings of SPIE Aerosense ‘03, Conference on Simulation, Orlando, USA, April 2003.

[LUZ 03b] LUZEAUX D., “La complémentarité simulation-essais pour l’acquisition de systèmes complexes”,Revue de l’Electricité et de l’Electronique, vol. 6, pp. 1–6, 2003.

[LUZ 10] LUZEAUX D., RUAULT J.-R.,Systems of Systems, ISTE, London, John Wiley & Sons, New York, 2010.

[MAI 98] MAIER M.W., “Architecturing principles for systems of systems”,Systems Engineering, vol. 1, no. 4, pp. 267–284, 1998.

[MEI 98] MEINADIER J.-P.,Ingénierie et intégration des systèmes, Hermès, Paris, 1998.

[MON 96] MONSEF Y.,Modélisation et simulation des systèmes complexes, Lavoisier, Paris, 1996.

[NAT 98] NORTH ATLANTIC TREATY ORGANISATION,NATO Modeling and Simulation Master Plan version 1.0., RTA (NATO Research & Technology Agency), AC/323 (SGMS)D/2, 1998.

[NRC 97] NATIONAL RESEARCH COUNCIL, Modeling and simulation: linking entertainment and defense, report of the Committee on Modeling and Simulation: Opportunities for Collaboration Between the Defense and Entertainment Research Communities Computer Science and Telecommunications Board Commission on Physical Sciences, Mathematics, and Applications, NRC, Washington, 1997.

[QUE 86] QUÉAU P.,Eloge de la simulation, Champ Vallon, Seyssel, 1986.

[REC 91] RECHTIN E.,Systems Architecting: Creating and Building Complex Systems, Prentice-Hall, Englewood Cliffs, 1991.

[RTO 03] NATO RESEARCH AND TECHNOLOGY ORGANISATION, RTO SAS Lecture Series on “Simulation of and for Military Decision Making” (RTO-EN-017), RTA (NATO Research & Technology Agency), The Hague, 2003.

[SIM 62] SIMON H.A., “The architecture of complexity: hierarchic systems”,Proceedings of the American Philosophical Society, vol. 106, no. 6, pp. 467–482, 1962.

[SIM 69] SIMON H.A.,The Sciences of the Artificial, MIT Press, Cambridge, 1969.

[SIM 76] SIMON H.A., “How complex are complex systems?”,Proceedings of the 1976 Biennial Meeting of the Philosophy of Science Association, American Philosophical Society, Philadelphia, vol. 2, pp. 507–522, 1977.

[STA 95] STANDISH GROUP INTERNATIONAL, Chaos, investigation report, West Yarmouth, March 1995.

[ZEI 76] ZEIGLER B.,Theory of Modeling and Simulation, John Wiley & Sons, Hoboken, 1976.


1 Chapter written by Pascal CANTOT.

1 [LUZ 10] defines a system of systems as follows: “a system of systems is an assembly of systems which could potentially be acquired and/or used independently, for which the developer, buyer and/or the user aim to maximize performance of the global value chain, at a given moment and for a conceivable group of assemblies”. We shall consider another definition, supplied by Maier, later on, which approaches the concept by extension rather than intension.

2 A very basic trainer for a precise task, for example, learning to use navigation instruments, can cost less (tens of thousands of euros), but for our purposes, we shall refer to complete systems (full flight and/or full mission).

3 The cost per hour of flight of the Rafale aircraft is a subject of controversy. From Cour des Comptes [CCO 04], we estimate the figure to be 35,000 euros per hour for the first aircraft, but the figure is certainly lower nowadays.

4 Note that until recently, the Queen was often known as the General, or by another high-ranking military title. The capacity to command the best troops is the reason for the power of the corresponding chessman.
