1 Introduction and Overview

Acronyms

DER Designated Engineering Representative
EASA European Aviation Safety Agency
FAA Federal Aviation Administration
IEEE Institute of Electrical and Electronics Engineers
IMA Integrated Modular Avionics
NASA National Aeronautics and Space Administration

1.1 Defining Safety-Critical Software

A general definition for safety is the “freedom from those conditions that can cause death, injury, illness, damage to or loss of equipment or property, or environmental harm” [1]. The definition of safety-critical software is more subjective. The Institute of Electrical and Electronics Engineers (IEEE) defines safety-critical software as: “software whose use in a system can result in unacceptable risk. Safety-critical software includes software whose operation or failure to operate can lead to a hazardous state, software intended to recover from hazardous states, and software intended to mitigate the severity of an accident” [2]. The Software Safety Standard published by the U.S. National Aeronautics and Space Administration (NASA) identifies software as safety-critical if at least one of the following criteria is satisfied [3,4]:

  1. It resides in a safety-critical system (as determined by a hazard analysis) AND at least one of the following:

    • Causes or contributes to a hazard.

    • Provides control or mitigation for hazards.

    • Controls safety-critical functions.

    • Processes safety-critical commands or data.

    • Detects and reports, or takes corrective action, if the system reaches a specific hazardous state.

    • Mitigates damage if a hazard occurs.

    • Resides on the same system (processor) as safety-critical software.

  2. It processes data or analyzes trends that lead directly to safety decisions.

  3. It provides full or partial verification or validation of safety-critical systems, including hardware or software systems.
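
One way to read these criteria is as a simple classification predicate. The sketch below is purely illustrative and is not part of the NASA standard; all flag names are assumptions chosen for readability.

```python
# Illustrative sketch only: the NASA criteria read as a classification
# predicate. Flag names are assumptions, not terms from NASA-STD-8719.13B.

# Hazard-related roles under criterion 1.
HAZARD_ROLES = {
    "causes_hazard",            # causes or contributes to a hazard
    "mitigates_hazard",         # provides control or mitigation for hazards
    "controls_sc_functions",    # controls safety-critical functions
    "processes_sc_data",        # processes safety-critical commands or data
    "detects_hazardous_state",  # detects/reports or takes corrective action
    "mitigates_damage",         # mitigates damage if a hazard occurs
    "shares_processor",         # resides with safety-critical software
}

def is_safety_critical(flags):
    """Return True if at least one of the three NASA criteria is met."""
    # Criterion 1: resides in a safety-critical system AND fills
    # at least one hazard-related role.
    criterion_1 = "in_sc_system" in flags and bool(flags & HAZARD_ROLES)
    # Criterion 2: data or trend analysis leading directly to safety decisions.
    criterion_2 = "drives_safety_decisions" in flags
    # Criterion 3: verification or validation of safety-critical systems.
    criterion_3 = "verifies_sc_systems" in flags
    return criterion_1 or criterion_2 or criterion_3
```

Note that criterion 1 is conjunctive (the software must reside in a safety-critical system *and* fill a hazard-related role), while criteria 2 and 3 apply on their own.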

From these definitions, it can be concluded that software by itself is neither safe nor unsafe; however, when it is part of a safety-critical system, it can cause or contribute to unsafe conditions. Such software is considered safety critical and is the primary theme of this book.

1.2 Importance of Safety Focus

In 1993, Ruth Wiener wrote in her book Digital Woes: Why We Should Not Depend on Software: “Software products—even programs of modest size— are among the most complex artifacts that humans produce, and software development projects are among our most complex undertakings. They soak up however much time or money, however many people we throw at them. The results are only modestly reliable. Even after the most thorough and rigorous testing some bugs remain. We can never test all threads through the system with all possible inputs” [5]. Since that time, society has become more and more reliant on software. We are past the point of returning to purely analog systems. Therefore, we must make every effort to ensure that software-intensive systems are reliable and safe. The aviation industry has a good track record, but as complexity and criticality increase, care must be taken.

It is not unusual to hear statements similar to: “Software has not caused any major aircraft accidents, so why all the fuss?” The contribution of software to aircraft accidents is debatable, because the software is part of a larger system and most investigations focus on the system-level aspects. However, in other domains (e.g., nuclear, medical, and space) software errors have definitely led to loss of life or mission. The Ariane 5 rocket explosion, the Therac-25 radiation overdoses, and the Patriot missile system’s failed intercept during the Gulf War are some of the better-known examples. The historical record for safety-critical software in civil aviation has been quite respectable. However, now is not the time to sit back and marvel at our past. The future is riskier for the following reasons:

  • Increased lines of code: The number of lines of code used in safety-critical systems is increasing. For example, the lines of code from the Boeing 777 to the recently certified Boeing 787 have increased eightfold to tenfold, and future aircraft will have even more software.

  • Increased complexity: The complexity of systems and software is increasing. For example, Integrated Modular Avionics (IMA) provides weight savings, easier installation, efficient maintenance, and reduced cost of change. However, IMA also increases the system’s complexity: functions that were previously isolated on physically separate federated hardware are combined onto a single hardware platform, so a single hardware failure can now take down multiple functions. This increased complexity makes it more difficult to thoroughly analyze safety impact and to prove that intended functionality is met without unintended consequences.

  • Increased criticality: At the same time that size and complexity of software are increasing, so is the criticality. For example, flight control surface interfaces were almost exclusively mechanical 10 years ago. Now, many aircraft manufacturers are transitioning to fly-by-wire software to control the flight control surfaces, since the fly-by-wire software includes algorithms to improve aircraft performance and stability.

  • Technology changes: Electronics and software technology are changing at a rapid rate. It is challenging to ensure the maturity of a technology before it becomes obsolete. For example, safety domains require robust and proven microprocessors; however, because a new aircraft development takes around 5 years, the microprocessors are often nearly obsolete before the product is flight tested. Additionally, the changes in software technology make it challenging to hire programmers who know Assembly, C, or Ada (the most common languages used for airborne software). These real-time languages are not taught in many universities. Software developers are also getting further away from the actual machine code generated.

  • More with less: Because of economic drivers and the pressure to be profitable, many (maybe even most) engineering organizations are being requested to do more with less. I’ve heard it described as follows: “First, they took away our secretaries; next, they took away our technical writers; and then, they took away our colleagues.” Most good engineers are doing the work of what was previously done by two or more people. They are spread thin and exhausted. People are less effective after months of overtime. A colleague of mine puts it this way: “Overtime is for sprints, not for marathons.” Yet, many engineers are working overtime for months or even years.

  • Increased outsourcing and offshoring: Because of market demands and the shortage of engineers, more and more safety-critical software is being outsourced and offshored. While not always a problem, outsourced and offshored teams often lack the systems domain knowledge and safety background needed to effectively find and remove critical errors. In fact, without appropriate oversight, they may even inject errors.

  • Attrition of experienced engineers: A number of engineers responsible for the current safety record are retiring. Without a rigorous training and mentoring program, younger engineers and managers do not understand why key decisions and practices were put into place. So they either don’t follow them or they have them removed altogether. A colleague recently expressed his dismay after his team gave a presentation on speed brakes to their certification authority’s new systems engineer who is serving as the systems specialist. After the 2-hour presentation, the new engineer asked: “You aren’t talking about the wheels, are you?”*

  • Lack of available training: There are very few safety-focused degree programs. Additionally, there is little formal education available for system and software validation and verification.

Because of these and other risk drivers, it is essential to focus even more on safety than ever before.

1.3 Book Purpose and Important Caveats

I am truly honored and humbled that you have decided to read this book. The purpose of this text is to equip practicing engineers and managers with the information they need to develop safety-critical software for aviation. Over the last 20 years, I’ve had the privilege of working as an avionics developer, aircraft developer, certification authority, Federal Aviation Administration (FAA) Designated Engineering Representative (DER), consultant, and instructor. I’ve evaluated safety-critical software on dozens of systems, such as flight controls, IMA, landing gear, flaps, ice protection, nose wheel steering, flight management, battery management, displays, navigation, terrain awareness and warning, traffic collision and avoidance, real-time operating systems, and many more. I’ve worked with teams of 500 and teams of 3. It has been and continues to be an exciting adventure!

The variety of positions and systems has allowed me to experience and observe common issues, as well as effective solutions, for developing safetycritical software. This book is written using these experiences to help practicing aircraft systems engineers, avionics and electronic systems engineers, software managers, software development and verification engineers, certification authorities and their designees, quality assurance engineers, and others who have a desire to implement and assure safe software.

As a practical guide for developing and verifying safety-critical software, the text provides concrete guidelines and recommendations based on real-life projects. However, it is important to note the following:

  • The information herein represents one person’s view. It is written based on personal experiences and observations, as well as research and interactions with some of the brightest and most committed people in the world. Every effort has been made to present accurate and complete advice; however, every day I learn new things that clarify and expand my thinking. Throughout the book terms like typically, usually, normally, most of the time, many times, oftentimes, etc. are used. These generalizations are made based on the numerous projects I’ve been involved in; however, there are many projects and billions of lines of code that I have not seen. Your experience may differ and I welcome that insight. If you wish to clarify or debate a topic, ask a question, or share your thoughts, feel free to send an email to both of the following addresses: [email protected] and [email protected].

  • I have used a personal and somewhat informal tone. Having taught DO-178B (and now DO-178C) to hundreds of students, this book is intended to be an interaction between me (the instructor) and you (the engineer). Since I do not know your background, I have attempted to write in such a way that the text is useful regardless of your experience. I have integrated war stories and some occasional humor, as I do in the classroom. At the same time, professionalism has been a constant goal.

  • Because this book focuses on safety-critical software, the text is written with higher criticality software (such as DO-178C levels A and B) in mind. For lower levels of criticality, some of the activities may not be required.

  • While the focus of this book is on aviation software, DO-178C compliance, and aircraft certification, many of the concepts and best practices apply to other safety-critical or mission-critical domains, such as medical, nuclear, military, automotive, and space.

  • Because of my background as a certification authority and as a DER, reading this book may at times be a little like crawling into the mind of a certification authority. (Don’t be afraid.) Both FAA and European Aviation Safety Agency (EASA) policy and guidance materials are discussed; however, the FAA guidance is primarily used unless there is a significant difference. While the content of this book is intended to explain and be consistent with the certification authorities’ policy and guidance as they exist at the time of this writing, this book does not constitute certification authority policy or guidance. Please consult with your local authority for the policy and guidance applicable to your specific project.

  • Having traveled the globe and interacted with engineers on six continents, I have made every effort to present an international perspective on safety-critical software and certification. However, because the majority of my work has taken place in the United States and on FAA certified projects, that perspective is presented.

  • Several RTCA documents, including DO-178C, are referenced throughout this document. It has been exciting to serve in leadership roles on three RTCA committees and to have a key role in developing the referenced documents. As noted in the foreword, RTCA has been gracious to allow me to reference and quote portions of their documents. However, this book is not a replacement for those documents. I’ve attempted to cover the highlights and guide you through some of the nuances of DO-178C and related documents. However, if you are developing software that must comply with the RTCA documents, be sure to read the full document. You can learn more about RTCA, Inc. or purchase their documents by contacting them at the following:

    RTCA, Inc.

    1150 18th Street NW Suite 910

    Washington, DC 20036

    Phone: (202) 833-9339

    Fax: (202) 833-9434

    Web: www.rtca.org

  • This book is written so that you can read it from beginning to end, or you can read selected chapters as needed. You will find occasional repetition between some chapters. This is intentional, since some readers may choose to use this book as a reference rather than read it straight through. References to related chapters are included throughout to help those who may not read the text cover to cover.

1.4 Book Overview

This book is divided into five parts. Part I (this part) provides the introduction and sets the foundation. Part II explains the role of software in the overall system and provides a summary of the system and safety assessment processes used for aviation. Part III starts with an overview of RTCA’s DO-178C, entitled Software Considerations in Airborne Systems and Equipment Certification, and the six other documents that were published with DO-178C. The section then goes through the DO-178C processes—providing insight into the guidance and suggestions for how to effectively apply it. Part IV explores four RTCA guidance documents that were released with DO-178C. The topics covered are software tool qualification (DO-330), model-based development and verification (DO-331), object-oriented technology and related techniques (DO-332), and formal methods (DO-333). Part V covers special topics related to DO-178C and safety-critical software development. These special topics are focused on aviation but may also be applicable to other domains and include noncovered code (extraneous, dead, and deactivated code), field-loadable software, user-modifiable software, real-time operating systems, partitioning, configuration data, aeronautical databases, software reuse, previously developed software, reverse engineering, and outsourcing and offshoring.

There are many other subjects that are not covered, including aircraft electronic hardware, electronic flight bags, and software security. These topics are related to software and are occasionally referenced in the text. However, due to space and time limitations, they are not covered. Some of these topics are, however, slated to be covered in the new edition of CRC Press’s Digital Avionics Handbook.

References

1. K. S. Mendis, Software safety and its relation to software quality assurance, ed. G. G. Schulmeyer, Software Quality Assurance Handbook, 4th edn., Chapter 9 (Norwood, MA: Artech House, 2008).

2. IEEE, IEEE Standard Glossary of Software Engineering Terminology, IEEE Std-610-1990 (Los Alamitos, CA: IEEE Computer Society Press, 1990).

3. National Aeronautics and Space Administration, Software Safety Standard, NASA-STD-8719.13B (Washington, DC: NASA, July 2004).

4. B. A. O’Connell, Achieving fault tolerance via robust partitioning and N-modular redundancy, Master’s thesis (Cambridge, MA: Massachusetts Institute of Technology, February 2007).

5. L. R. Wiener, Digital Woes: Why We Should Not Depend on Software (Reading, MA: Addison-Wesley, 1993).

*For those of you who don’t know, speed brakes are on the wings.
