10 Additional Considerations

PREVIOUSLY DEVELOPED HARDWARE

Previously developed hardware (PDH), like the other topics covered in Section 11 of DO-254, may or may not apply to a project depending upon the selected certification strategy.

PDH is invoked if there is an intention to reuse hardware that was developed at some earlier time. There are numerous reasons for reusing previously developed hardware, but for the purposes of this book the most relevant reason is to simplify and shorten the path (and cost) to DO-254 compliance and the eventual approval of the hardware.

Most PDH will come from one or more of the following sources:

•  Commercial off-the-shelf (COTS) hardware or component

•  Airborne hardware developed to other standards (e.g., a military or company standard)

•  Airborne hardware that predates DO-254

•  Airborne hardware previously developed at the lower design assurance level (DAL)

•  Airborne hardware previously developed for a different aircraft

•  Airborne hardware previously developed and approved, and then subsequently changed

The PDH itself can be from any level of hardware ranging from entire systems down to fragments of HDL code. Most PDH items will be one of the following:

•  An entire system

•  An LRU or box (including SW)

•  An LRU or box (hardware only)

•  An entire circuit card assembly (including SW)

•  An entire circuit card assembly (hardware only)

•  Part of a circuit card assembly (CCA)

•  An entire PLD (device and HDL)

•  An entire HDL design

•  Part of an HDL design

Any of these items may have to be modified, retargeted, upgraded to a higher DAL, or used in a new way when used as PDH. In addition, modifying PDH may require the use of new (or newer) design tools.

Some common scenarios for PDH reuse are:

•  A hardware item (LRU/box, CCA, or PLD) that is being reused without changes.

•  A hardware item (LRU/box, CCA, or PLD) that is being modified for a new use.

•  A hardware item (LRU/box, CCA, or PLD) that is being updated but will remain in its original system and application.

•  An HDL design that is being retargeted to a new PLD device.

•  Increasing the DAL for any of the above scenarios.

Section 11.1 of DO-254 defines four scenarios for PDH, discussed in sections 11.1.1 through 11.1.4: modifications to previously developed hardware (11.1.1), change of aircraft installation (11.1.2), change of application or design environment (11.1.3), and upgrading a design baseline (11.1.4). While each of these sections addresses a different aspect of PDH that may apply on its own, it is more common for more than one of them to apply to any given use of PDH. For example, if a PLD was developed several years previously for aircraft model X, using it on aircraft model Y would be a change of aircraft installation (section 11.1.2) that could also require that the PLD be modified for its new installation (section 11.1.1), use a later version of design tools to make the changes (section 11.1.3), be upgraded to a higher DAL (section 11.1.4), and interface to a different circuit card assembly (section 11.1.3).
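To make the point that these sections usually apply in combination more concrete, the following minimal Python sketch maps a few yes/no questions about a reuse situation to the DO-254 sections that are likely to apply; the flag names are illustrative assumptions rather than DO-254 terminology.

    # Hypothetical sketch: map attributes of a PDH reuse situation to the
    # DO-254 sections (11.1.1 through 11.1.4) that are likely to apply.
    # The flag names are illustrative assumptions, not DO-254 terms.

    def applicable_pdh_sections(modified: bool,
                                new_aircraft_installation: bool,
                                new_application_or_design_environment: bool,
                                dal_increased: bool):
        sections = []
        if modified:
            sections.append("11.1.1 Modifications to Previously Developed Hardware")
        if new_aircraft_installation:
            sections.append("11.1.2 Change of Aircraft Installation")
        if new_application_or_design_environment:
            sections.append("11.1.3 Change of Application or Design Environment")
        if dal_increased:
            sections.append("11.1.4 Upgrading a Design Baseline")
        return sections

    # The example from the text: a PLD developed for aircraft X, modified and
    # retargeted with newer tools for aircraft Y at a higher DAL.
    print(applicable_pdh_sections(modified=True,
                                  new_aircraft_installation=True,
                                  new_application_or_design_environment=True,
                                  dal_increased=True))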

A key point in reusing previously developed hardware is to maximize the reuse of the approved compliance data from the previous development program. The amount of data that can be reused will depend on the level or type of hardware being reused: the more complete the reused hardware, the more data can be reused.

Reusing an LRU as-is in a new installation is optimal since all of the compliance data from the previous development program can be reused. If the LRU had to be modified for use in the new installation, the amount of reusable data would go down while the amount of new development and verification effort would go up.

For any use of PDH the specific data items and activities that have to be augmented or redone must be determined on a case-by-case basis. It is impractical to provide comprehensive guidance that can apply to all potential uses of PDH, and this is one of the reasons that the guidance in DO-254 does not attempt to provide any more detail than it does. However, it is possible to provide some additional summary guidance that may assist the reader in understanding what DO-254 alludes to when it discusses the use of PDH.

The strategy for the use of PDH should address the hardware item itself, its data, and its interfaces to its parent hardware item. All AEH items are ultimately components within a higher level piece of hardware (commonly called its parent hardware): HDL code is a component in a PLD device; a PLD or other electronic component is itself a component in a CCA; a CCA is a component within an LRU or box; and an LRU or box is a component in its overlying system or the aircraft itself. As such, when any AEH item is used as PDH, its performance and functionality must be addressed as a singular item and also with respect to its context as a component in its parent hardware item. Thus the PDH item’s design data (requirements and design) must be complete in expressing the PDH item, but must also be integrated into its parent hardware by requirements traceability. Verification must confirm not only that the PDH functions as intended, but that it fulfills all functionality allocated to it from the parent hardware, and that it functions correctly when installed in the parent hardware.
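The traceability part of this integration lends itself to a mechanical check. The sketch below assumes a hypothetical parent-to-child allocation map and invented requirement identifiers; it simply flags parent allocations with no corresponding PDH requirement and PDH requirements that trace to nothing above them.

    # Minimal traceability-gap check (illustrative only; the identifiers and
    # data layout are assumptions, not a DO-254-mandated format).

    # Parent-hardware requirements allocated to the PDH item, mapped to the
    # PDH requirements that are claimed to satisfy them.
    allocation = {
        "CCA-REQ-101": ["PLD-REQ-001", "PLD-REQ-002"],
        "CCA-REQ-102": ["PLD-REQ-003"],
        "CCA-REQ-103": [],                      # allocated but not yet traced
    }

    # All requirements in the PDH item's own specification.
    pdh_requirements = {"PLD-REQ-001", "PLD-REQ-002", "PLD-REQ-003", "PLD-REQ-004"}

    traced_children = {child for children in allocation.values() for child in children}

    untraced_parents = [p for p, children in allocation.items() if not children]
    orphan_children = sorted(pdh_requirements - traced_children)

    print("Parent allocations with no PDH requirement:", untraced_parents)
    print("PDH requirements not traced to the parent:", orphan_children)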

When determining what must be done to the PDH and its data, the existing hardware and data must be assessed against the PDH scenarios defined in sections 11.1.1 to 11.1.4 of DO-254, along with the specific needs of the parent system.

Table 10.1 summarizes the data that can typically be reused, along with any additional effort that may have to be conducted, for some common uses of previously developed hardware. The information in Table 10.1 covers just a handful of common possibilities and is not intended to be comprehensive, nor does it provide guidance on how any use of PDH should or must be conducted. The data and activities in Table 10.1 are instead just intended to provide some additional insight into the PDH activities that DO-254 describes.

The intended strategy for PDH must be stated and described in the PHAC. An analysis should be conducted to identify any gaps between the existing hardware and data and what the new use requires. Any gaps that are identified should be filled with new data, which may include data from service history, additional verification, or reverse engineering. A change impact analysis should be performed; it should consider the effect on all aspects of DO-254 and the hardware life cycle, which may include planning, requirements, design, validation, verification, implementation, transition to production, configuration management, and tool qualification. The change impact analysis may need to be included with the PHAC.
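One simple way to organize the gap analysis is as a checklist over the life cycle areas listed above. The sketch below is a hypothetical example of such a record; the area names come from the paragraph above, while the findings are invented for illustration.

    # Hypothetical gap-analysis record for a PDH item; the findings are
    # invented examples, not guidance on what any particular reuse requires.

    pdh_gap_analysis = {
        "planning":                 "PHAC must describe the PDH strategy",
        "requirements":             "existing, reusable as-is",
        "design":                   "existing, reusable as-is",
        "validation":               "derived requirements to be re-validated at new DAL",
        "verification":             "re-run in-circuit tests in the new parent CCA",
        "implementation":           "no change",
        "transition to production": "no change",
        "configuration management": "re-baseline under the new program",
        "tool qualification":       "newer synthesis tool version to be assessed",
    }

    open_gaps = {area: action for area, action in pdh_gap_analysis.items()
                 if action not in ("no change", "existing, reusable as-is")}
    for area, action in open_gaps.items():
        print(f"{area}: {action}")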

Any changes to PDH must be assessed for their effect on the system safety assessment, as should the use of PDH in a new application or installation. If the system safety assessment indicates an increase in DAL for the PDH, the PDH and its data must be assessed against the requirements for the new DAL. The higher DAL may require additional changes to all aspects of the PDH’s compliance to DO-254, which again may include planning, requirements, design, validation, verification, implementation, transition to production, configuration management, and tool qualification.

Service history can be used (but is not required) to support the use of PDH.

When discussing PDH with the FAA, be sure to consider the FAA's definition of PDH in Order 8110.105, which states that PDH is airborne electronic hardware (simple or complex PLDs) that was approved before Advisory Circular AC 20-152 was issued (June 2005). PDH also includes projects for simple or complex PLDs whose hardware management plans were approved before AC 20-152 (June 2005). The FAA uses the term “legacy” systems for systems approved before AC 20-152.

COMMERCIAL OFF-THE-SHELF COMPONENTS USAGE

The majority of airborne electronic hardware is composed of commercial off-the-shelf parts ranging from the simplest passive components to the most complex integrated circuits. In fact, very little of most systems can be considered custom built. The concept of COTS also extends to assemblages of these components, such as sub-assemblies, programmed PLDs, commercially obtained intellectual property (IP) and library functions for PLDs, and even entire systems. However, while each component in a system may be a COTS component, DO-254 section 11.2 states that the certification process does not address individual components, modules, or sub-assemblies because those COTS components are addressed when the function they belong to is verified. Or in other words, if the AEH item is verified, then its constituent components are assumed to be verified as well. Thus, for the most part, individual COTS components do not have to be specifically addressed.

TABLE 10.1

Common Uses of PDH and Typical Levels of Reuse


However, DO-254 also states that the basis for using COTS components is the use of an electronic component management plan (ECMP) in conjunction with the design process. Or in other words, using an ECMP is essential for establishing the pedigree and authenticity of all of the COTS components that are used in the AEH, thus providing supporting substantiation for the assumption that verifying the function or system will also verify the veracity of the COTS components.

The ECMP should satisfy items 1 through 7 of DO-254 section 11.2.1, which can be considered the ECMP’s objectives or goals. These seven goals can be paraphrased as follows:

1.  Ensure that all components were manufactured by companies that have a proven track record of producing high quality parts, and that none of the components are of questionable origin. For example, counterfeit components are often virtually indistinguishable from authentic parts but will often exhibit lower reliability or performance, so the ECMP should ensure that components are only purchased from original manufacturers through reliable sources.

2.  As part of their overall reputation for excellence, the component manufacturers have a high integrity quality control program to ensure that all components are of consistently high quality and will always meet their rated specifications.

3.  Each component that was selected for the AEH has successfully established its quality and reliability through actual service experience. In other words, the ECMP should ensure that new and novel components not be used until they have established their reliability in actual use.

4.  Each component has been qualified by the manufacturer to establish its reliability, or else the AEH manufacturer has conducted additional testing on the component to establish its reliability after procurement.

5.  The quality level of the components is controlled by the manufacturer, or if this cannot be established, additional testing is conducted by the component manufacturer or the AEH manufacturer to ensure that the components have adequate quality. In other words, the manufacturer should test all components to positively establish that every one will meet or exceed its specifications, and if they do not then the AEH manufacturer should.

6.  Each component in the AEH has been selected for its ability to meet or exceed the requirements of its intended function in the AEH, including environmental, electrical, and performance parameters, or has been screened through additional testing to ensure that it will meet or exceed the needs of the AEH. So while it is best to use only components that are rated by the manufacturer to be in excess of what the AEH needs, components with lesser specifications can also be tested to identify individual components that exceed their rated specifications to the point where they meet the needs of the AEH.

7.  All components are continuously monitored to quickly identify failures or other anomalous behavior, and if any are encountered, the deficiencies will be fed back to the component manufacturer to effect corrective action. In other words, keep track of component performance and pay special attention to failures or other unwanted behavior so components that are deficient or do not meet their rated specifications can be quickly identified and corrected by the manufacturer.

In addition to the component management process, DO-254 section 11.2.2 discusses areas of potential concern with respect to procurement issues. Procurement issues are not limited to simple availability and obsolescence, but include such issues as variations in quality between production runs, manufacturing improvements that may affect component performance in a way that can undermine design assurance, and the availability of design assurance data for COTS components that are considered complex or otherwise require compliance to the guidance in DO-254.

Obsolescence is one of the more serious and prevalent issues with regard to component procurement. When a component manufacturer discontinues a component used in an AEH item, the ripple effects can be challenging, expensive, time-consuming, and even affect the design assurance of the system. Reputable manufacturers will generally provide enough advance notice to allow customers to plan for component obsolescence, but even with plenty of notice the effects can still be significant. There is little official guidance on how it should be managed, but common sense indicates that the ECMP should (in addition to items 1 through 7 in DO-254 section 11.2.1) be vigilant for obsolescence and query manufacturers for long-term availability of the component both before it is selected and after the AEH has entered production.

Integrated circuits are well known for exhibiting measurable variations in performance depending not only on when they were manufactured, but even on the location of each die on the silicon wafer. However, manufacturers should sort each component die by its performance to ensure that any part that is labeled with a specific part number will still meet the guaranteed performance specified in the component data sheet. Thus while variation in performance is listed as a concern for component management, mitigating this phenomenon may not precisely be a component management issue, but rather a design issue in which designers should ensure that the electronic design in which the component is used will work properly regardless of where the individual components fall in the performance ranges specified in the component data sheet.

It is also common for integrated circuits with the same part number but different manufacture dates to show significant variations in performance, often toward improvements in performance. A common cause of such a variation is the shrinking of the feature sizes in the semiconductor device, which can often result in improved speed and performance. It is also common, however, for the improvements in performance to result in devices that perform too well for the design and cause instability or other anomalous behavior that does not exist when using the original devices.

Some of the more complex COTS components, such as assemblies or pre-programmed PLDs, may raise concerns with certification authorities and/or result in inadequate design assurance data if the manufacturer does not or will not provide the data with the hardware. AEH designers should consider this possibility when using such COTS devices in their designs. The same issue applies to COTS IP cores that are popular with HDL-based digital components. IP cores can be an attractive way to buy pre-designed complex functionality for PLDs, but when obtaining IP cores the availability or generation of design assurance data should be carefully considered before committing to their use. If the design assurance data is not available for whatever reason, it may have to be reverse engineered to meet the design assurance goals of DO-254 and other certification guidance.

DO-254 does not provide complete guidance on the use of COTS components. If further guidance is needed on preparing an ECMP, the International Electrotechnical Commission document IEC TS 62239, Process management for avionics—Preparation of an electronic components management plan, can be used as a guide.

PRODUCT SERVICE EXPERIENCE

Product service experience takes advantage of time that a component or AEH hardware has accumulated in actual operation by documenting the time in service and using that data to substantiate a claim that the hardware is safe and satisfies the objectives of DO-254. Service experience can be used to supplement or even fully satisfy the objectives of DO-254 for COTS devices or previously developed hardware. In both cases the use of service experience is optional; it can be used if it is available, but is not required to satisfy the objectives of DO-254. If service experience will be used, its use must be coordinated with the certification authorities and stated in the PHAC.

The use of service experience requires that current and/or previous use of the hardware be documented. How much time must be documented is not entirely standardized, and like many other aspects of certification it is evaluated per case by the certification authorities, so it is important to communicate early and often with the certification authorities about its use. The service experience does not have to be from an aerospace application, but logically the closer the service experience is to the intended application, the more relevant that experience will be and the more confidence that experience will inspire. The relevance of the service experience is one of the factors that will be assessed when determining its acceptability.

The relevance and acceptability of the service experience will depend on four criteria:

1.  The relevance of the service experience to the intended use, as defined by its application, function, operating environment, and design assurance level. The more similar the service experience is to the intended use, the more relevant it is, and therefore the more acceptable the data is likely to be.

2.  Whether the hardware that accumulated the service experience is the same version and configuration that is proposed for the intended application. If they are different, then it may be more difficult to justify the use of the service experience, or additional analysis may be needed to justify it.

3.  Whether there were design errors that surfaced during the period of service history, and if so, whether those errors were satisfactorily dispositioned. The disposition of each error can be effected through elimination of the error, mitigation of the error’s effects, or through an analysis that shows that the error has no safety impact.

4.  The failure rate of the item during the service history.

These criteria are satisfied through four activities that are used to assess service experience data, or in other words, the following four activities should be conducted to determine whether the service experience adequately meets the four criteria:

1.  Conduct an engineering analysis of the service history with respect to the item’s application, installation, and environment to assess how relevant the service history is to the intended use. This analysis may look at a wide variety of data, including specifications, data sheets, application notes, service bulletins, user correspondence, and errata notices, to determine how relevant the service history is.

2.  Evaluate the intended use of the item for its effect on the safety assessment process for the new application. If there were any design errors discovered during the service history, this evaluation should include approaches that can be employed to mitigate the effects of the errors, if applicable.

3.  If there were design errors, the statistical data for the errors should be evaluated for their effect on the safety assessment process. If there are no statistics, the errors can be qualitatively assessed for their impact.

4.  Evaluate the problem reports for the item during the service experience to identify all errors that were discovered and how those errors were dispositioned. While correcting any errors during that period is generally preferred, it is not required to enable the service experience to be used as long as the remaining errors are mitigated and/or shown to not affect the design assurance of the item. The mitigation can be implemented through architectural means in the new application, or through additional verification to show that the error will not be an issue in the new application.

Once the service history data has been evaluated and deemed to be sufficient to establish or supplement the design assurance for the item for its intended use, the data and its assessment should be documented to substantiate the claim for design assurance. The service experience assessment data should include the following:

1.  Identify the item and its intended function in the new application, including its DAL. If the item is a component in a Level A or B function, there should be a description of how the additional design assurance strategies in DO-254 Appendix A will be satisfied, such as the use of architectural mitigation and additional or advanced verification, to establish the requisite design assurance.

2.  The process used to collect and evaluate the service experience data, and the criteria that were used to assess whether that data was adequate and valid, should be described.

3.  The service experience data should be documented. This data should include the service history data that was evaluated, any applicable change history, assumptions that were used in the analysis of the data, and a summary of the analysis results.

4.  A justification for why the service history is adequate for establishing or supplementing (as applicable) the design assurance for the item in its new intended use and DAL.

In practical terms, service history should be based on the same component, with the same part number and version. The service hours should be commensurate with the failure probability required by the design assurance level. PLDs with millions of hours of service experience are suitable to propose for design assurance level A and B applications. Service experience with thousands of hours will not yield much credit for DO-254 compliance. Service experience from flight test programs is not suitable for demonstrating compliance to DO-254 since the aircraft is not yet certified.
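To give a feel for why "millions of hours" is the right order of magnitude, the following sketch applies the standard zero-failure exponential demonstration bound, hours >= -ln(1 - confidence) / target failure rate; the target rate and confidence level are illustrative assumptions, not values prescribed by DO-254.

    import math

    def hours_required(target_failure_rate_per_hour: float, confidence: float) -> float:
        """Service hours needed, with zero observed failures, to demonstrate that
        the failure rate is no worse than the target at the given one-sided
        confidence (standard exponential zero-failure demonstration bound)."""
        return -math.log(1.0 - confidence) / target_failure_rate_per_hour

    # Illustrative values only: a 1e-6 per-hour target at 95% confidence
    # requires roughly three million hours of failure-free service.
    print(f"{hours_required(1e-6, 0.95):.2e} hours")

    # A few thousand failure-free hours only supports a much higher rate
    # (about 6e-4 per hour at the same confidence).
    print(f"{-math.log(1.0 - 0.95) / 5000:.1e} per hour")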

TOOL ASSESSMENT AND QUALIFICATION

Tool assessment and qualification is one of the more misunderstood aspects of DO-254. It is not uncommon for the uninitiated to assume that the design tools used to produce Level A DO-254 compliant hardware have to be “special.” The news that no special tools are required is generally greeted with relief, but on the other hand the news that other things must be done to avoid the use of “special” tools is greeted with a resurgence of concern.

The fundamental tenet of DO-254’s approach to tools (in general, “tools” refers to computer-based design and verification software, or to electronic measurement tools used in an electronics lab, not to mechanical hand tools found in a shop) is that no tool can be trusted unless it has been proven in some way to always produce the correct output when given the correct inputs. There are a number of ways to satisfy this need to prove the correctness of a tool’s output, and fortunately most of them do not require that the tool be “special.”

There are two basic approaches to proving that a tool produces the correct output: the first approach is to comprehensively test and analyze a tool before it is used to prove that the tool will generate the correct outputs under the conditions under which it will be used, otherwise known as qualifying the tool; the second approach is to first use the tool and then test its outputs to independently prove that the outputs it produced were correct for the inputs it was given, otherwise known as verification. Both approaches will produce the same result in the long run, but have different considerations for their use.

Qualifying a design tool requires that the confidence in the tool be commensurate with the design assurance level of the hardware it is producing. In general, this means that the assurance for the output of the tool is at least as high as the end-item hardware, so for Level A hardware the design tool would have to have the same level of integrity as the Level A hardware that it generates. For most hardware design tools this would be a monumental task, especially for PLD design tools such as synthesis tools given their immense complexity and known propensity for introducing failure modes by altering the logic in the HDL (see the chapter on Design Assurance Through Design Practice).

The advantage of qualifying a tool is that the tool output can be trusted and is thus exempt from further scrutiny. For a design tool it means that the output of the tool will not have to be verified because the qualification process essentially verifies the tool. Qualifying (i.e., verifying) the tool or verifying the tool’s output both require significant effort and time; the path that is taken depends on the needs of the program and should be preceded by a thorough analysis of the comparative benefits, both immediate and long term. Since tool qualification is perpetual—once qualified, the specific version and configuration of the tool that was qualified can be used from that point on and for multiple projects—future uses of the tool should be considered when assessing the long-term benefits of qualification.

The disadvantage of qualifying a tool is that for some tools the cost of qualification can be enormous, often exceeding the cost and time of verifying their outputs. For tools that will not be used beyond the current project, verification of their outputs can be the most efficient and least painful option. An additional consideration when weighing the cost versus benefits of tool qualification is that verifying the tool's output, if properly managed and documented, can be leveraged in the future as a means of satisfying some or all of the qualification criteria for the tool for future uses. Thus if it is known that the tool will be used for future projects, the documented independent assessment of its outputs on one project can be used to support qualification of the tool for future projects or even future evolutions of the original project.

Tools are divided into two types: design tools, which are used to generate the hardware design; and verification tools, which are used to verify the design. There is also a verification tool sub-type commonly known as verification coverage tools, which are used with elemental analysis to assess the completion of verification testing, or in other words to measure the extent to which the verification process tested the individual elements of the design.

Design tool examples would be the synthesis and place and route tools used to convert an HDL design into the programming file for a PLD, or the schematic capture and circuit card layout tools used for circuit card designs. Verification tool examples would include HDL simulators used for conducting formal simulation verification on the source code and post-layout models for PLDs, or an automated test stand that is used for conducting hardware tests on an LRU, or a logic analyzer and oscilloscope used for open-box verification and testing. An example of a verification coverage tool would be a code coverage analysis tool that is used during simulation to measure how thoroughly each line of HDL source code was exercised during simulations.

Design tools are treated most stringently with respect to tool qualification because a design tool has the capability to introduce an error into the design, so a flaw in the tool has a high probability of causing an error in the design and thus a reduction in design assurance. Verification tools are treated more leniently because the worst they can do is fail to detect an error in the design; they cannot introduce an error into the design, and failing to detect a design error would require the alignment of the verification tool’s flaw with a specific type of design error, so the likelihood that a flaw in the tool can reduce design assurance is much lower than for a design tool.

Coverage tools are treated most leniently—they require no assessment or qualification—because they cannot materially reduce the design assurance of the hardware, since all they do is evaluate how thoroughly the design has been covered by the verification process.

The essentials of tool qualification can be summarized as follows:

•  A design tool does not have to be qualified if:

•  Its outputs are independently assessed, or

•  It is used for Level D or E design.

•  A verification tool does not have to be qualified if:

•  Its outputs are independently assessed, or

•  It is used for Level C, D, or E verification, or

•  It is only used to measure verification completion.

Note that laboratory verification tools, such as logic analyzers, oscilloscopes, function generators, and the ubiquitous handheld meters, are generally exempt from formal qualification considerations as long as their calibration is up to date and they have been tested to confirm that they can correctly measure the signal types that they will be encountering. Calibration and testing, in conjunction with the tools’ widespread use, will generally suffice as independent assessment of their outputs.

Automated test stands may or may not require qualification depending upon how much they do on their own. A test stand that simply measures and records input and output signals, which are then manually reviewed, can normally get by with limited testing to show that it correctly measures and records representative signals of the types that will be measured, as would be done for other laboratory verification tools. An automated test stand that also evaluates the signals it measures and independently determines the pass/fail result will have to be qualified.

Figure 11-1 in DO-254 is a flow chart for the tool assessment and qualification process. Each block in the flow chart has a corresponding paragraph in section 11.4.1 that describes the activities that should be conducted for its corresponding block. Most of the blocks and their text descriptions are easily understood and are more or less self-explanatory, but it is still worthwhile to discuss the activities associated with each stage of the process. All of the information from the tool assessment and qualification process should be documented in the tool assessment and qualification data, which can then be recorded in a report if desired.

The first step in the process is to identify the tool, meaning that the tool name, model, version, manufacturer, and host environment (type of computer and operating system) should be documented in the tool assessment and qualification data.

The second step is to identify the process that the tool supports, which is another way of saying that the specific application of the tool should be used to identify the tool as either a design tool or a verification tool. For example, a synthesis tool is used only during the synthesis stage of the PLD design process, and a place and route tool is used only in the layout stage of the PLD design process, so both will be classified as design tools. In some instances a tool may be used for both design and verification; in that situation, the tool should be addressed separately for each type of use. Part of this activity is to define how the tool will be used and not used, such as identifying the scope of use for the tool; for example, if a single tool is capable of both synthesizing HDL code and also placing and routing the design, but the intent is to use only the place and route function in conjunction with a third-party synthesis tool, it must be made clear which of these functions will be used. Likewise, if the tool has limited capabilities and will therefore only perform certain functions, those limitations should also be documented. Finally, the output that the tool will produce (examples: an HDL netlist from a synthesis tool, a programming file from a place and route tool, a schematic diagram from a schematic capture tool, a waveform from a simulation tool, a text log file from a simulator running self-checking testbenches, etc.) should be identified and documented. All of this information should be documented in the tool assessment and qualification data.
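The identification data from these first two steps is straightforward to capture in a structured record. The sketch below shows one hypothetical layout for that portion of the tool assessment and qualification data; the field names and example values (including the tool name) are assumptions, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class ToolAssessmentRecord:
        """Illustrative record for the first two steps of the tool assessment
        process; the fields and values are assumptions, not a required format."""
        name: str
        version: str
        manufacturer: str
        host_environment: str
        tool_type: str              # "design" or "verification"
        functions_used: list = field(default_factory=list)
        functions_excluded: list = field(default_factory=list)
        outputs: list = field(default_factory=list)

    synthesis_tool = ToolAssessmentRecord(
        name="ExampleSynth",                       # hypothetical tool name
        version="2024.1",
        manufacturer="Example EDA Inc.",
        host_environment="Linux x86-64 workstation",
        tool_type="design",
        functions_used=["VHDL synthesis"],
        functions_excluded=["place and route"],    # third-party tool used instead
        outputs=["post-synthesis HDL netlist"],
    )
    print(synthesis_tool)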

The information gathered in this step has a direct bearing on the activities that need to be conducted for the tool qualification process. Identifying the way the tool will be used will identify the tool as either a design or verification tool, which will affect how the tool must be addressed during the tool assessment and qualification process. Identifying the role and scope of the tool will identify the scope of any required assessment and qualification that occurs during the process. Identifying the tool’s output will define the specific activities that must be conducted to either verify the tool’s output or qualify the tool.

Once this information is documented, the third step asks whether the output of the tool will be independently assessed. For most projects, this is the most important step in the tool qualification process because qualifying a tool can be avoided by independently assessing the tool’s output. For a design tool, assessing the tool’s output can be accomplished through independent verification of the hardware. For verification tools, the assessment can be accomplished through a manual review of the outputs or by comparing the tool’s output (also done manually) to a similar tool’s output.

For PLD designs, the HDL text editor (it sometimes comes as a surprise to engineers that a text editor is considered a design tool, but as the primary means of creating an HDL design it definitely qualifies) produces HDL text files that are independently assessed during the HDL code review, so no further action on that tool is required. The synthesis tool generates an HDL netlist that describes the reduced and optimized logic, and when manually reviewed this netlist is essentially unintelligible, so assessing the output of a synthesis tool makes no real sense. Likewise, the programming file that is the output of the place and route tool has no human-readable content, and it has no meaning other than as the input to the PLD device programmer, so assessing its output is equally impractical. There are design equivalency comparison tools that can compare the HDL to a netlist and verify that they express the same functionality, but if those tools are used as part of the formal assessment process they too will have to be subjected to the tool qualification process, so their use may not be cost-effective in the long run.

The inability to independently assess the outputs of the synthesis tool and the place and route tool would seem to present an obstacle to relieving them of the need for qualification. However, DO-254 settles this quandary by stating that if independent verification conducted on the finished design indicates that the design is true to its intended functionality, then the tool chain that created the design can be assumed to be functioning correctly. So in effect all of the tools in the PLD design tool chain (the synthesis tool, place and route tool, device programmer, and even the text editor) are simultaneously covered by the verification process even though the output of each tool is not assessed separately.

In this situation the design equivalency comparison tool is not even needed, since the equivalency between the HDL and its synthesized netlist is irrelevant if the finished product (the programmed PLD) is verified and proven to be correct. Such a tool can be used informally to provide added confidence in the performance of the synthesis tool, and its use for that purpose should be encouraged, but as a formal step in the tool assessment and qualification process its use is unnecessary. When design equivalency comparison tools are used for rehosting designs or converting an FPGA into an ASIC, they should be assessed for that purpose.

The design and verification processes defined in DO-254 include all the necessary processes and activities to eliminate the need to qualify any design tool, so for almost all projects, qualifying a design tool will be an elective task that is driven by extra-project considerations such as the future use of the tool (as discussed previously).

Verification tools, on the other hand, are not so easily assessed. Verification and verification tools normally play a large part in eliminating the need to qualify a design tool, but while verification follows design and provides the assessment for design tools, there are no processes that follow verification that can provide the same service for verification tools. The assessment of verification tools must be conducted as part of the verification process, or be conducted as a separate activity that is independent of both the design and verification processes.

DO-254 provides two suggestions on how to assess the outputs of verification tools: manually review the tool outputs, or compare the output of the tool against the output of a comparable (but dissimilar) tool. For the first suggestion, that of manually reviewing the tool’s outputs, an example would be to manually review the waveforms from a PLD simulation against waveforms captured during in-circuit hardware tests. For the second suggestion, that of comparing the tool’s output to the equivalent output from a comparable but dissimilar tool, an example would be to run the same set of PLD simulations with two dissimilar simulators and compare the waveforms.
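The second approach can be partly automated once both simulators dump their results in a common tabular form. The sketch below assumes two hypothetical CSV files with matching time and signal columns and simply reports any samples on which the dissimilar simulators disagree.

    import csv

    def load_samples(path):
        """Read a (time, signal, value) CSV dump into a dictionary keyed on
        (time, signal). The file format is an assumption for this sketch."""
        with open(path, newline="") as f:
            return {(row["time"], row["signal"]): row["value"]
                    for row in csv.DictReader(f)}

    def compare_dumps(path_a, path_b):
        a, b = load_samples(path_a), load_samples(path_b)
        mismatches = [(key, a[key], b[key])
                      for key in sorted(a.keys() & b.keys()) if a[key] != b[key]]
        missing = sorted(a.keys() ^ b.keys())
        return mismatches, missing

    # Hypothetical file names for the two dissimilar simulators' result dumps.
    mismatches, missing = compare_dumps("simulator_a.csv", "simulator_b.csv")
    print(f"{len(mismatches)} mismatching samples, {len(missing)} samples in only one dump")
    for key, val_a, val_b in mismatches:
        print(key, val_a, "!=", val_b)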

If the outputs of a tool are independently assessed as discussed in the previous paragraphs, no further action is necessary. The assessment of the tool, including the rationale and results of the assessment, should be documented as part of the tool assessment and qualification data.

If a tool’s output is not independently assessed, the fourth step of the tool assessment and qualification process identifies the design assurance level of the function that the tool will support. If the tool is a Level D or E design tool, a Level C, D, or E verification tool, or if the tool is used to measure verification completion, then no assessment or qualification is needed. If the tool is a Level A, B, or C design tool, or a Level A or B verification tool, then additional assessment is necessary. The designation of the tool according to its function and DAL should be documented in the tool assessment and qualification data.

In the fifth step of the process, Level A, B, and C design tools, and Level A and B verification tools, whose outputs are not independently assessed, can avoid further assessment and qualification if they have significant relevant history. Relevant history is an attractive alternative to tool qualification, or in some cases, depending on the tool, can be used as part or all of a tool's qualification. At the same time, however, the criteria for invoking relevant history are not clearly defined in DO-254. DO-254 states that no further assessment is necessary (or, given that any tool that has reached this part of the process has not been assessed, DO-254 is actually stating that no assessment is necessary) if it is possible to show that the tool has been used previously and produced acceptable results. This guidance is somewhat (but justifiably) ambiguous in that it does not include quantifiable criteria for what constitutes relevant history, which leaves the determination to the certification authorities. In turn, FAA Order 8110.105 CHG 1, Section 4-6 provides guidance to certification authorities on how relevant history should be justified, and while this guidance is more specific than DO-254, it still leaves the determination of acceptability to the discretion of the certification authorities.

According to DO-254 and FAA Order 8110.105 CHG 1, relevant history, if it is going to be used, should meet the following criteria:

•  The tool history can be from airborne or non-airborne use.

•  The tool history must be documented with data that substantiates its relevance and credibility.

•  The justification for the tool history should include a discussion of the relevance of the tool history to the proposed use of the tool. “Relevance” in this context refers to the way in which the tool was used, the DAL of the hardware it designed or verified, the type of data it produced or measured, and the specific functionality of the tool that was used.

•  The tool history should prove that the tool generates the correct result.

•  The use of tool history should be documented in the project PHAC.

•  The use of tool history should be justified early in the project.

If relevant history is claimed, the tool assessment and qualification data should include a thorough discussion of the history, including how the history satisfies the criteria listed above. Any current use of a tool can serve as relevant history in the future, so if it is possible that a tool will be used again for a similar application, the details of the current use should be carefully documented as if its use as relevant history has already been decided. Creating the documentation will take little time and could save a great deal of effort on future programs, or even on a subsequent evolution of the same program to implement a change in the design.

If relevant history is not applicable or will not be claimed, the tool must be subjected to the qualification process, starting with establishing a baseline for the tool and setting up a problem reporting system for the tool and its qualification, as described in step 6 of the tool assessment and qualification process. This means the specific version and configuration of the tool should be placed in a configuration management system and treated like any other piece of configuration managed data, including the assignment of an unambiguous configuration identity. This information should be documented in the tool assessment and qualification data.

Once the tool has been baselined, a “basic” tool qualification should be performed. This activity essentially tests the tool against its documented (as in a user’s manual) performance and functionality. It requires that a basic tool qualification plan and procedure be generated and executed, using the tool’s documented functionality and performance as requirements to be verified. This information should be documented in the tool assessment and qualification data. In addition, if the tool that is used for the actual design or verification activities differs from the tool that was baselined and then qualified, the tool assessment and qualification data must include a justification for using the different version and substantiating data for why it was acceptable to do so. For example, if an HDL simulator was qualified prior to the start of verification, and it was then discovered that the version that was qualified could not be used for formal simulations but that the next incremental version could, the use of the newer version without repeating the qualification must be justified. An example of a justification might be that the changes between the two versions fixed the problem that prevented the earlier version from being used but did not affect any other aspect of the tool’s capabilities and functionality, including errata sheets from the tool manufacturer that pinpoint all of the changes that were made. The tool assessment and qualification data should include the basic tool qualification plan, the tool requirements that were verified and the test procedures that verified them, the qualification results, how independence was maintained during the qualification, and an interpretation of the results that support the qualification result.
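One way to organize the basic qualification data is as a set of records pairing each tool requirement (derived from the tool's documentation) with the procedure that exercises it and the observed result. The sketch below is a hypothetical structure with invented values, not a required format.

    from dataclasses import dataclass

    @dataclass
    class BasicQualTestCase:
        """Illustrative basic tool qualification record: a requirement derived
        from the tool's documented functionality, the procedure that exercises
        it, and the observed result. All values below are invented examples."""
        tool_requirement: str
        procedure: str
        expected: str
        observed: str

        @property
        def passed(self) -> bool:
            return self.expected == self.observed

    cases = [
        BasicQualTestCase(
            tool_requirement="Simulator shall evaluate a VHDL 'after' delay",
            procedure="Simulate testbench tb_delay.vhd; inspect transition time",
            expected="output transitions 10 ns after input",
            observed="output transitions 10 ns after input",
        ),
    ]
    print("Basic qualification result:", all(c.passed for c in cases))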

Basic qualification constitutes the entire tool qualification effort for all verification tools and for Level C design tools, whereas Level A and B design tools must be subjected to a “full” tool qualification program. DO-254 provides no detailed guidance on this type of tool qualification because of its variability and because it will be unique for each tool, providing instead some generic guidance to point the applicant in the general direction of how the process should be conducted. As starting points it suggests the use of the guidance in Appendix B of DO-254 and the tool qualification guidance of DO-178, plus any other means acceptable to certification authorities. So in practical terms, if an applicant wants to try qualifying a design tool, it is pretty much an open field where all terms, conditions, and requirements must be negotiated with the applicable certification authorities.

This type of tool qualification is very rarely attempted, and in most cases it is not a cost-effective route. However, if a design tool is to be qualified, it will require a highly formalized and structured qualification effort with its attendant plans, procedures, reports, analyses, traceability, problem reports, etc.—essentially the same as what would be conducted and documented for a formal verification effort. The rigor of the qualification process will be determined in part by the nature of the tool, and proportional to the design assurance level of the hardware. Any design tool qualification effort may also require significant participation from the tool manufacturer, including access to proprietary design information that the manufacturer may not be willing to provide. A tool accomplishment summary can be used to document the results of the tool qualification effort.

Neither basic nor full tool qualification needs to verify all of the tool's functionality; the qualification can be limited in scope to just the functionality that will be used. For example, if a synthesis tool is going to be qualified and it will only be used to synthesize VHDL, then the Verilog functionality of the tool does not have to be qualified. Likewise, if a simulator can simulate both VHDL and Verilog, and only Verilog will be simulated, then the basic qualification (or the relevant tool history) need only cover its Verilog capabilities.

Overall, the most common approach to design tool qualification is to conduct formal verification instead of qualifying the tool, and the most common options for assessing the outputs of a verification simulation tool (for Level A and B hardware) are:

•  Perform basic tool qualification

•  Use relevant history if it exists

•  Run the simulations on two dissimilar simulators, compare the results, and justify any discrepancies

•  Use the same test cases (inputs and expected results) for both simulations and in-circuit hardware tests

•  Compare the results from simulations and electrical tests, and justify any discrepancies

•  And as usual, other methods can be proposed in the PHAC for consideration by certification authorities

Table 10.2 provides an overview of the possible tool qualification outcomes according to tool type, DAL, independent output assessment, and acceptable relevant history.

TABLE 10.2

Tool Qualification Outcomes

Tool                      DAL       Output Assessed   Relevant History   Qualification
Design                    All       Yes               N/A                Not Required
Verification              All       Yes               N/A                Not Required
Design                    A,B       No                No                 Design Tool Qualification
Design                    A,B       No                Yes                Not Required
Design                    C         No                No                 Basic Qualification
Design                    C         No                Yes                Not Required
Design                    D,E       N/A               N/A                Not Required
Verification              A,B       No                No                 Basic Qualification
Verification              A,B       No                Yes                Not Required
Verification              C,D,E     N/A               N/A                Not Required
Verification Completion   N/A       N/A               N/A                Not Required
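The lookup embodied in Table 10.2 can also be expressed as a small decision function. The sketch below simply mirrors the table and is illustrative only.

    def tool_qualification_outcome(tool_type: str, dal: str,
                                   output_assessed: bool,
                                   relevant_history: bool) -> str:
        """Return the qualification outcome per Table 10.2. tool_type is
        'design', 'verification', or 'coverage' (verification completion)."""
        if tool_type == "coverage":
            return "Not Required"
        if output_assessed:
            return "Not Required"
        if tool_type == "design" and dal in ("D", "E"):
            return "Not Required"
        if tool_type == "verification" and dal in ("C", "D", "E"):
            return "Not Required"
        if relevant_history:
            return "Not Required"
        if tool_type == "design" and dal in ("A", "B"):
            return "Design Tool Qualification"
        return "Basic Qualification"   # Level C design tools, Level A/B verification tools

    # Example: a Level A synthesis tool whose output is covered by independent
    # verification of the finished PLD.
    print(tool_qualification_outcome("design", "A", output_assessed=True,
                                     relevant_history=False))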
