7

Verification

Verification is performed to ensure that the requirements are met by the hardware. Verification is performed for all requirements, regardless of their origin as allocated, decomposed, or derived. The design implementation (the hardware) is evaluated against its requirements to demonstrate that the hardware performs its intended function. Since the verification is requirements based, the higher the quality of the requirements, the easier the associated verification will be.

Verification activities include review, analysis, and test. Level A and B hardware require independent verification. Independence can be achieved through a technical review of data by a person other than the author. A tool can also be used to achieve independence when the tool evaluates data for correctness, such as a test results checker.

A review is a qualitative assessment while an analysis is a quantitative assessment. A review is used to check a document or data against a set of criteria. Analysis assesses the hardware design or circuits for measurable or calculated performance or behavior.

Testing is the singular method by which the behavior of the actual hardware can be observed and measured.

Verification activities, methods, and transition criteria are described in the Hardware Verification Plan. Figure 7.1 highlights the verification aspects of the hardware development life cycle.

In this chapter, "testbench" will be used as a synonym for a test procedure used in simulation because in the world of HDL (in particular VHDL) they serve a similar purpose. "Test procedure" will be used in its literal sense, i.e., when referring to procedures used for hardware test.

An effective technique is to start with a plan to organize the verification strategy. Each hardware requirement should be assessed as to whether it is suitable for review, analysis, and/or test. The optimal method for verifying each requirement is then selected and documented. For PLD requirements, functional simulation and post-layout timing simulation are best suited to the analysis environment. Timing analysis can be targeted to post-layout timing models in the simulation tool, to the static timing analysis report from the layout tool, or to a combination thereof. An example test plan matrix is shown in Table 7.1.

Image

FIGURE 7.1 Verification Aspects of the Hardware Development Life Cycle

FUNCTIONAL FAILURE PATH ANALYSIS

A functional failure path analysis is performed prior to the verification process. The functional failure path analysis is used to determine which parts of the design could cause a failure that has been categorized as catastrophic (Level A) or hazardous (Level B). The circuits in the Level A and Level B functional paths are subject to additional verification methods as described in Appendix B of DO-254.

The functional failure path analysis is a top-down analysis starting from system level functional failure paths. The functional failure paths are identified for each system level function classified as catastrophic or hazardous in the functional hazard assessment. An example of a system level functional failure could be hazardously misleading data on a primary flight display or uncommanded movement of a primary flight control surface. The idea of the analysis is to identify the electronic hardware in the system level functional failure path, then to decompose the electronic hardware functional failure path into the circuits constituting that path, the circuit functional failure path into the components constituting that circuit, and finally to identify the elements within the circuits.

TABLE 7.1

Example Test Plan

Image

A circuit element can be thought of as a small function that can be comprehensively analyzed and tested without further decomposition and is at the lowest level of design abstraction. In other words, elements are the smallest functional building blocks that the design engineer assembled to create the design. For example, a circuit card designer will design a filter, not the internals of the operational amplifier used in the filter. For a circuit card, elements are circuit functions with well-bounded and well-characterized functionality that can be tested. Examples of circuit elements on a circuit card include amplifiers, comparators, filters, analog to digital or digital to analog convertors, and discrete logic.

The size of an element can also be defined by the capabilities of the test and verification tools. For a PLD, a working definition of an element would be logic functions with 12 or fewer inputs (to ensure compatibility with code coverage tool operation). Examples of PLD elements would include RTL-level building-block functions such as counters, decoders, comparators, multiplexers, registers, or finite state machines.
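To make the notion of a PLD element more concrete, the following VHDL sketch shows the kind of RTL building block that would typically be treated as a single element. The entity name, port names, and widths are illustrative only; with 12 inputs the function sits at the working limit described above and can be comprehensively analyzed and tested without further decomposition.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative PLD element: an 8-bit loadable counter with a
-- synchronous load, a count enable, and an asynchronous reset.
entity count8 is
  port (
    clk    : in  std_logic;
    rst_n  : in  std_logic;             -- asynchronous reset, active low
    load   : in  std_logic;             -- synchronous load
    enable : in  std_logic;             -- count enable
    d_in   : in  unsigned(7 downto 0);  -- load value
    q_out  : out unsigned(7 downto 0)   -- counter value
  );
end entity count8;

architecture rtl of count8 is
  signal count : unsigned(7 downto 0);
begin
  process (clk, rst_n)
  begin
    if rst_n = '0' then
      count <= (others => '0');
    elsif rising_edge(clk) then
      if load = '1' then
        count <= d_in;
      elsif enable = '1' then
        count <= count + 1;
      end if;
    end if;
  end process;

  q_out <= count;
end architecture rtl;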

An example of the decomposition of a functional failure path is shown in the following figures. Figure 7.2 shows the system level functional failure path for the loss of motion hazard in a primary flight control system. This example identifies the contribution from the sensors on the input to the functional failure path.

Figure 7.3 shows the decomposition from the sensors functional failure path to the functional elements in the hardware on the circuit card.

Image

FIGURE 7.2 Functional Failure Path for Loss of Motion

Image

FIGURE 7.3 Functional Elements within the Functional Failure Path

Image

FIGURE 7.4 Elements That Implement a Function

Figure 7.4 shows the decomposition of one of the functional elements on the circuit card to the elemental functions or circuits that implement the function.

And finally, Figure 7.5 shows the circuits for each elemental function.

Another example is provided to show how a PLD works into the decomposition of functional failure paths. This is a fictitious system for illustration purposes only. This example has two functional failure paths: one for the control actuator output and the second for the position feedback and monitoring. Figure 7.6 shows the block diagram of the system.

The decomposition starts at the top and works down to the elemental functions:

•  System level to hardware level

•  Hardware level to circuit level

•  Circuit level to component level

•  Component level to elemental level

Figure 7.7 shows the functional failure paths (control and feedback), the electronic hardware in each FFP, and the functional elements in each FFP as indicated by the dashed boxes.

The next diagram, Figure 7.8 shows the functional elements in each functional failure path, the elemental functions in each functional element, and the components in each elemental function.

Figure 7.9 then shows the circuits for each elemental function. In this example, some of the circuit functional elements have one circuit element while others have more than one. The controller for the analog to digital convertor was chosen to be a PLD. Since the PLD is entirely in the actuator feedback functional failure path, the design assurance level of the PLD will be the same as the design assurance level of the actuator feedback functional failure path. In this case it is Level A. The PLD in this example also shows multiple elements within the device.

Image

FIGURE 7.5 Circuits within an Element

Image

FIGURE 7.6 Flight Control with Two Functional Failure Paths

Image

FIGURE 7.7 Functional Elements within Each Functional Failure Path

When a PLD has functions that are in multiple functional failure paths, the design assurance level for the PLD is the highest level associated with the respective functional failure paths. The reason for this is that it is not possible to demonstrate that the gates, clocks, and power within the PLD are completely isolated. Even if it was possible to show the separation between Level A, B, C, and D functions within one PLD, they still share the same physical packaging and connection to power and ground.

Figure 7.10 shows the same concept for a PLD. The functional failure comes from an output with erroneous behavior. Following the suggestions in this book on writing requirements, the functional failure path will traverse one or more outputs in one or more functional elements within the PLD.

Once the functional failure path analysis is performed, an additional verification method is selected for Level A and Level B circuits or elements.

APPENDIX B ADDITIONAL VERIFICATION

Appendix B of DO-254 identifies several methods of advanced verification that can be applied to Level A and Level B hardware. At least one of these methods needs to be selected and applied. DO-254 does permit other methods to be proposed for additional verification, subject to certification authority agreement. The advanced verification methods identified in DO-254 include elemental analysis, safety-specific analysis, and formal methods. This book will concentrate on elemental analysis since it is by far the most commonly used and is supported by commercially available tools.

Elemental analysis provides metrics on how much of the design (elemental functions) was covered through the requirements-based verification of the associated functional elements. In other words, the test cases or collections of inputs and expected outputs that verify the requirements are analyzed to determine whether the associated hardware design and circuits are also fully verified. Since elemental analysis is based on requirements-based verification, it is of paramount importance that the requirements be constructed such that they are eminently verifiable. Well-formed requirements and trace data that show the connection between requirements and the design also support elemental analysis.

Image

FIGURE 7.8 Functional Elements Overlaid on the Design

Image

FIGURE 7.9 Elemental Functions

Image

FIGURE 7.10 Functional Failure Path for a PLD

Code coverage is often portrayed as elemental analysis, but this may not always be the case. While code coverage can measure the level of design coverage from simulations, it cannot assess the correctness of the coverage, or in other words, whether the elements in the design were exercised according to their functionality (through requirements-based tests) or in a random or non-functional manner. Code coverage can be artificially boosted through the addition of test vectors that are solely designed to increase coverage without regard to whether they actually verify anything; while this will generate complete coverage, it does nothing to support design assurance.

Similarly, code coverage conducted on a post-layout timing model of a PLD can return artificially high levels of toggle coverage of the internal design because the timing delays in the model combined with the often complex and circuitous signal routing can combine to create race conditions and glitches that imitate legitimate logic combinations. There is also little correlation between the elements (lines of code) in the design (the HDL source code) and the nets and nodes in the post-layout timing model, so code coverage of the inside of the post-layout timing model has little to no real utility. In addition, DO-254 states that elemental analysis should be conducted at the level at which the designer expressed the design (in this case the HDL elements, which are nominally lines of code), whereas synthesized and routed netlists express the design at a significantly lower level of expression than the actual design. Thus code coverage is only useful when measured during functional simulations (simulations of the HDL source code as opposed to the post-layout timing model or post-synthesis netlist), or when toggle coverage is measured for inputs and outputs during simulations on the post-layout timing model. Measuring toggle coverage during post-layout simulations may be unnecessary if a review of the test cases indicates that all inputs and outputs were toggled appropriately during the normal course of post-layout simulations; however, since code coverage metrics are measured in the background during simulations, they can normally be gathered during simulations with less trouble and more accuracy.

For analog circuits and circuits not implemented in a PLD, no amount of code coverage will reveal how much of the design was covered by the requirements-based tests. While not frequently used these days, PLDs specified with schematic capture would need an analysis of the design as specified by the designer, not code coverage. Code that is written as behavioral code or translated from a higher level language such as SystemC can also be problematic since the designer specifies functionality, not the design itself. Code coverage for elemental analysis for a PLD works best when the requirements are specified functionally and written at the pin level and the associated design is expressed at the register transfer level (RTL). An RTL design will have very close alignment with pin level requirements.

Using the definition of elemental analysis in DO-254, for a PLD that has functional requirements written at the pin level and valid trace data to an RTL-level design, the analysis is complete when the requirements-based verification is complete. How is this possible? If the requirements are comprehensively verified, all the outputs will have been covered and all the inputs will have been covered. Since the design is closely related to the requirements and there are no extraneous elements in the design—as shown through top-down and bottom-up traceability—comprehensive requirements-based verification can be shown to also be comprehensive verification coverage of the design. This alignment is a natural byproduct of writing requirements from output back to input with a template that covers power on conditions, response to reset conditions, and how the output(s) asserts and deasserts in response to the associated input(s). Figure 7.11 shows how requirements and the related design align.
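As a hedged illustration of this alignment, consider a hypothetical pin-level requirement written to such a template: DATA_RDY shall deassert low while RESET is asserted high; DATA_RDY shall assert high on the rising edge of CLK when ENABLE is high and FIFO_EMPTY is low; and DATA_RDY shall deassert low on the rising edge of CLK when ENABLE is low or FIFO_EMPTY is high. The RTL written against such a requirement maps almost condition for condition onto the requirement text, which is what makes comprehensive requirements-based verification line up with comprehensive coverage of the design. The signal names are assumptions for the sketch only.

library ieee;
use ieee.std_logic_1164.all;

entity data_rdy_gen is
  port (
    clk        : in  std_logic;
    reset      : in  std_logic;   -- asynchronous reset, active high
    enable     : in  std_logic;
    fifo_empty : in  std_logic;
    data_rdy   : out std_logic
  );
end entity data_rdy_gen;

architecture rtl of data_rdy_gen is
begin
  -- Reset behavior:    DATA_RDY deasserts low while RESET is high.
  -- Assert behavior:   DATA_RDY asserts high on the rising edge of CLK
  --                    when ENABLE is high and FIFO_EMPTY is low.
  -- Deassert behavior: DATA_RDY deasserts low on the rising edge of CLK
  --                    when ENABLE is low or FIFO_EMPTY is high.
  process (clk, reset)
  begin
    if reset = '1' then
      data_rdy <= '0';
    elsif rising_edge(clk) then
      if enable = '1' and fifo_empty = '0' then
        data_rdy <= '1';
      else
        data_rdy <= '0';
      end if;
    end if;
  end process;
end architecture rtl;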

The test cases to verify the requirements along with the trace data that shows the connection between requirements and design will demonstrate that the design is fully covered through the requirements-based verification. Code coverage should still be performed to satisfy FAA Order 8110.105. Alternatively, code coverage for elemental analysis can be proposed when there is suitable alignment and traceability between the requirements, the design, and the test cases.

Elemental analysis, in particular code coverage, can reveal elements that were not fully exercised by the requirements-based test cases. When a coverage deficiency is revealed, the first impulse may be to analyze the deficiency to identify the missing input combinations and then add those combinations to the test case. While this approach may eliminate the deficiency and increase coverage, care must be exercised to ensure that this is actually the correct response.

Image

FIGURE 7.11 Relationship between Functional Elements and the Elemental Functions in an HDL Design

The goal of elemental analysis is not to get full coverage of the design. In fact, it can be argued that elemental analysis has no goal; it just fulfills the function of measuring the verification coverage of the design—full coverage of the design is the goal of the requirements-based verification. Therefore, when elemental analysis indicates less than full coverage of the design, it should not be interpreted as some kind of failure.

Coverage deficiencies can be caused by inadequate test cases, design errors, unused design features, extraneous functionality, defensive design practices, or requirements issues such as excess, missing, erroneous, or inadequate requirements. Each deficiency in coverage should be analyzed to identify the cause of the deficiency, and only then should the deficiency be addressed.

The first impulse for correcting a deficiency—adding input combinations solely to eliminate deficiencies—will not directly address any of the possible causes, so jumping to that solution without an analysis is not a sound approach. If additional input combinations are desired, they should be added methodically and deliberately through an analysis of the applicable test case to identify how the applicable requirement was not fully verified. Only when the analysis reveals a deficiency in the test case (i.e., an aspect of the requirement that was not verified by the test case) should additional inputs be added, and even then the additional inputs should be based on verifying the unaddressed aspect of the requirement, not the inputs that would eliminate the deficiency. If an analysis indicates that the test case completely verified all aspects of the requirements, additional analyses should be conducted to determine whether the coverage deficiency is a symptom of a more serious issue. On the other hand, if an analysis shows that the deficiency was just a matter of selecting a different data pattern on a data bus (and where the exact data pattern had no functional significance), then it would be appropriate to change or add the input data to eliminate the deficiency.

INDEPENDENCE

DO-254 requires independent verification of all Level A and B functions. Independence is achieved when the verification is performed by a person, tool, or process that is different than the designer or author of the data. Test cases and analysis may be created by the hardware designer as long as the subsequent review of the test cases and analysis is performed by someone other than the designer. Since the designer has intimate knowledge of the design of the hardware, there is a potential that their verification efforts are biased toward checking that the hardware operates as designed. In other words, a designer can induce a common mode error by verifying that the hardware is verified against its intended design rather than its intended functionality. Independence is used to ensure that hardware is verified for intended function in accordance with the requirements.

Independent reviewers can work in the same team. DO-254 does not specify that designers have to be in the design team and reviewers in the verification team. Tools can also be used to achieve independence. For example, a suitably assessed or qualified simulation tool can use automatic results checking rather than requiring a person to check each and every simulation result.

The above discussion of independence, as well as the definition used in DO-254, relies solely on functional or role-based independence, i.e., independence that is defined by whether a reviewer is independent of the material being reviewed. There is also a more subtle component of independence that is not addressed in DO-254 and may be overlooked due to its subtlety, but can undermine independence if it is not recognized and mitigated. That component is cultural independence. In the context of this discussion, “cultural” refers to the prevailing engineering culture of the organization or even of the corporation. The means by which an engineering culture can undermine independence is not always apparent and can be difficult to recognize, let alone mitigate, but its effects can be real and detrimental to the integrity of design assurance.

An organization’s engineering culture can (and often does) influence or even dominate peer reviews, or for that matter, virtually all aspects of engineering. In most cases, especially when the culture stresses integrity and thoroughness, the effect of the culture is a positive one. In some cases, however, a strong engineering culture can make it difficult to ensure that peer reviews are actually independent despite compliance to the independence criteria in DO-254. If an engineering department has institutionalized practices and standards that are less than adequate for the required design assurance, a reviewer that comes from that same environment may not be able to recognize deficiencies even though they conscientiously look for them. Or in other words, if a designer creates an unsafe design feature in the hardware because the engineering standards and culture incorrectly endorse that unsafe feature as being acceptable, and the reviewer is a product of that same culture and standards, then despite being independent of the design the reviewer will incorrectly interpret the unsafe design feature as being acceptable.

Another (and arguably more common) example of this phenomenon can occur when writing and reviewing requirements. If the prevailing engineering culture has adopted poor requirements practices and has de-emphasized the generation and use of high quality requirements to the point where no one in the organization can even recognize a poorly written requirement, and both the requirements author and reviewer are products of that culture, then even an independent requirements review will only confirm that the poorly written requirements were “correctly” written in compliance with the prevailing standards. This effect can occur even if the author and reviewer are from different departments (such as design versus verification departments).

Thus if the cultural aspect of independence is not accounted for, there is a real possibility that effective independence cannot be achieved even when complying with the DO-254 definition of independence. Mitigating this phenomenon may be difficult, but if the selection of reviewers takes this cultural influence into consideration, or the review checklist uses review criteria that are detailed and reflect known high integrity concepts, its effect can be reduced if not actually eliminated.

REVIEW

Reviews are performed on all DO-254 life cycle data. Level A and B hardware require an independent review in accordance with Appendix A of DO-254. Reviews are typically documented with a checklist that lists the reviewer, item reviewed, and the criteria used for the review. The review checklist should be referenced from the Hardware Verification Plan or Hardware Verification Standards and stored in the configuration management system in accordance with the Hardware Configuration Management Plan. The review procedures or checklists should be HC1 controlled for Level A and B hardware. The completed checklist is stored as HC2 controlled data in the configuration management system in accordance with the Hardware Configuration Management Plan.

Reviews of hardware documents and data are accomplished by initially performing a full review. Once the full review of the data or document has been performed, incremental reviews can be used for any subsequent changes. That is, the review criteria only need to be applied to changes made to the data or document.

When performing the review, all checklist questions should be answered. Any comments should be noted on the checklist and discussed with the document author or technical authority. Each comment should document an agreed corrective action. Once the agreed comments and actions are resolved, the reviewer can verify that the document has been updated appropriately.

Checklists present a series of questions that are selected to ensure that the hardware design life cycle data has the following characteristics:

•  Unambiguous—the information and/or data is documented in such a manner that it only permits a single interpretation.

•  Complete—the information and/or data includes all the necessary and relevant descriptive material that is consistent with the standards. The figures are clearly labeled, all terms are defined, and units of measure are specified.

•  Verifiable—the information and/or data can be reviewed, analyzed, or tested for correctness.

•  Consistent—the information and/or data has no conflicts within the document or with other documents.

•  Modifiable—the information and/or data is structured in such a way that changes, updates, or modifications can be completely, consistently, and correctly made within the existing structure.

•  Traceable—the origin or derivation of the information and/or data can be demonstrated.

Peer review checklists should also include criteria to check all aspects of hardware standards for reviews of requirements, design, verification, and validation data.

Table 7.2 lists the reviews to be performed and each associated review checklist. If the data item is not required for the design assurance level, then the associated review is not needed.

Reviews of hardware management plans and standards are accomplished by a full document review. Document reviews are also used for the review of the Hardware Configuration Index, Hardware Environment Configuration Index, and the Hardware Accomplishment Summary. Incremental reviews can be used for subsequent changes.

Reviews of requirements should apply the checklist questions or criteria to each and every requirement. For convenience, the reviewer can create a spreadsheet to log the checklist response on a requirement-by-requirement basis. Reviews of the design apply the checklist questions to each and every schematic (or file for HDL-based designs). The reviewer can create multiple tabs on a spreadsheet to log the checklist response on a file-by-file basis.

TABLE 7.2

Life Cycle Data Item and Associated Review Checklist

Life Cycle Artifact Reviewed | Review Checklist
Plan for Hardware Aspects of Certification | Plan for Hardware Aspects of Certification Review Checklist
Hardware Design Plan | Hardware Design Plan Review Checklist
Hardware Verification Plan | Hardware Verification Plan Review Checklist
Hardware Process Assurance Plan | Hardware Process Assurance Plan Review Checklist
Hardware Configuration Management Plan | Hardware Configuration Management Plan Review Checklist
Hardware Requirements Standards | Hardware Requirements Standards Review Checklist
Hardware Design/Code Standards | Hardware Design/Code Standards Review Checklist
Hardware Requirements Document | Hardware Requirements Document Review Checklist
Hardware Design Data | Hardware Design Data Review Checklist
Test Cases | Test Case Review Checklist
Test Procedure/Testbench | Test Procedure/Testbench Review Checklist
Test Results | Test Results Review Checklist
Hardware Verification Report | Hardware Verification Report Review Checklist
Hardware Configuration Index | Hardware Configuration Index Review Checklist
Hardware Environment Configuration Index | Hardware Environment Configuration Index Review Checklist
Hardware Accomplishment Summary | Hardware Accomplishment Summary Review Checklist

Reviews of test cases apply the checklist questions to each and every test case within a test case file. The reviewer can create a spreadsheet for each test case file to log the checklist response on a test case basis.

Reviews of the test procedures and testbenches apply the checklist questions to each and every test procedure or testbench file. The reviewer can create a spreadsheet to log the checklist response on a file-by-file basis.

Reviews of the test results or simulation results apply the checklist questions to each and every test or simulation results file. The reviewer can create a spreadsheet to record the checklist response on a file-by-file basis.

Reviews should clearly list the findings, discrepancies, and any items that do not satisfy the review criteria. The author should make the necessary updates to resolve the review comments, or provide a response to clarify any misunderstandings on the part of the reviewer. Once the document or data has been updated, the reviewer should check the updates and close out the review comments. It is helpful for the review checklist to specify the configuration identifier and version of the document initially reviewed and the final corrected version of the document.

ANALYSIS

As stated above, analysis is used for quantitative assessment of the hardware design. Circuit designs typically use thermal analysis to ensure that the design works within specified tolerances at extreme high and low temperatures. Circuits are also analyzed to ensure that the design works as specified when component tolerances vary. Reliability analysis is used to determine whether the actual implementation of the design meets reliability requirements. Designs can be compared to previously approved designs with a similarity analysis.

PLD designs use an analysis typically referred to as simulation to verify that the design meets its functional requirements. Simulations are computing resource intensive, so for best performance it is recommended that simulations be run on the most capable computing platforms available. In general, Linux-hosted systems can provide the best performance for execution of simulations. Workstations or servers running the simulation should be configured with adequate random access memory, a high speed or solid state disk, and high speed Ethernet controller. Faster workstation or server clock speed and multicore processors will also accelerate simulation performance. Running simulations on laptop computers will reduce performance and can interfere with staff productivity if the laptop is also used for office applications such as electronic mail and document editing. Tool vendors can provide information on the optimal environment for hosting simulation tools.

Once the test cases are selected for comprehensive coverage of the requirements, the testbenches are created to execute the simulation. These testbenches can be run in multiple scenarios or configurations to collect various types of data and metrics. Figure 7.12 shows the scenarios used for simulation of a PLD design.

Functional simulations are conducted on the HDL and apply the input stimuli defined in the test cases and implemented in the testbenches. These simulations run quicker than timing simulations and allow debug and design tuning. Functional simulations can also use code coverage tools to collect coverage metrics. These coverage metrics can be used to support the elemental analysis.

Post-layout simulations are conducted on the post-layout (sometimes called post-route) timing model, which is the netlist output of the place and route tool annotated with the representative timing delays of the target PLD device. This timing model is the most realistic representation possible of the programmed PLD and allows simulations to be conducted on a model that is as faithful as possible to the programmed device. Post-layout simulations with timing data allow for worst-case timing analysis, typical timing analysis, and best-case timing analysis. These simulations are used to demonstrate that the device requirements are met under all timing variations. Typical timing simulations are useful when comparisons with device tests are needed. Post-layout simulation can also verify toggle coverage—this shows whether all inputs and outputs have transitioned high and low during the testbench execution. Toggle coverage verifies that device signals are present and used, and that static inputs and unused outputs do not toggle.
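One way to reuse the same testbench across the functional and post-layout scenarios of Figure 7.12 is to instantiate the design under test as a component and bind the desired model with VHDL configurations. The sketch below is illustrative only: it assumes the RTL is compiled into the working library, that the post-layout netlist from the place and route tool is compiled into a separate library named gatelib with an architecture named structure, and that the timing delays are applied by annotating the associated SDF file when the post-layout configuration is simulated.

library ieee;
use ieee.std_logic_1164.all;

entity tb_my_pld is
end entity tb_my_pld;

architecture bench of tb_my_pld is
  -- Component declaration: the instance is bound later by a configuration.
  component my_pld is
    port (
      clk   : in  std_logic;
      reset : in  std_logic;
      dout  : out std_logic
    );
  end component my_pld;

  signal clk   : std_logic := '0';
  signal reset : std_logic := '1';
  signal dout  : std_logic;
begin
  dut : my_pld
    port map (clk => clk, reset => reset, dout => dout);

  clk <= not clk after 10 ns;   -- 50 MHz clock

  stim : process
  begin
    wait for 100 ns;
    reset <= '0';
    -- requirements-based stimulus from the test cases would follow here
    wait;
  end process stim;
end architecture bench;

-- Bind the RTL source for functional simulation and code coverage collection.
configuration tb_my_pld_functional of tb_my_pld is
  for bench
    for dut : my_pld
      use entity work.my_pld(rtl);
    end for;
  end for;
end configuration tb_my_pld_functional;

-- Bind the post-layout timing model for best, typical, and worst case runs.
library gatelib;
configuration tb_my_pld_postlayout of tb_my_pld is
  for bench
    for dut : my_pld
      use entity gatelib.my_pld(structure);
    end for;
  end for;
end configuration tb_my_pld_postlayout;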

Image

FIGURE 7.12 PLD Simulation

Hardware testing for DO-254 requirements-based verification is typically performed at room temperature. The hardware is built with the actual circuit components, but it would not be practical to select components with the most extreme tolerances. Further, testing would require multiple runs with components selected for best case timing or tolerance and again for worst case timing or tolerance. Post-layout simulations allow efficient evaluation of PLD behavior at minimum and maximum propagation delays. This allows hardware testing to use standard rather than screened or selected components. Timing simulations also provide evidence that the PLD will meet its requirements and provide the correct signals to peripheral devices for any combination of temperature, clock, or voltage variations, or variations of the PLD or external components that the PLD interfaces with.

An analysis performed for verification credit should state the procedure used, the configuration identification of the data analyzed, the person who performed the analysis, the results of the analysis, any actions needed to correct the design or the data, and the conclusion or summary of the results.

When simulation is used for PLD analysis, all raw data files from the simulation tool should be collected. Waveforms should be exported to a file and, at a minimum, archived for the project. Test cases can be indexed and the index included in the waveform to allow quick lookup within the waveform file. A test case counter is an effective way to index through a waveform file. Some engineers use “do” files to quickly index timelines in waveform files. The “do” files should be listed in the verification report and archived with the verification data.

If the simulation waveforms will not be manually analyzed to compare actual timing and logic levels to the expected results in the test cases, or in other words if the testbenches will evaluate the data to derive the pass/fail results, the testbenches should be written to output text log files that document the waveform data. These log files should be verbose and include the following:

•  The date, time, name of tester, and configuration identifier of the testbench.

•  The configuration identifier of the design under test (being simulated).

•  Every stimulus applied to the design under test, along with the timestamp for the stimuli.

•  Every output from the design under test, along with the timestamps for the outputs.

•  For each pass/fail evaluation, the expected result, actual result, pass/fail evaluation, and timestamp.

Simulation log files that simply state PASS or FAIL (whether for each expected result or for the entire test case) are not sufficient evidence for compliance to DO-254 verification objectives.

In addition, the testbench code that evaluates the data and generates the pass/fail result should be designed to be easily understood and characterized such that a code review will determine to a high level of confidence that the code will correctly evaluate the results. Obviously the level of confidence in the testbench code should be commensurate with the design assurance level of the design. There is no official guidance on how such testbench code should be written, nor for how it should work, but the wise approach is to make it as simple and as easily understood as possible to minimize the probability of errors and thus of concerns from the certification authorities.
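There is no required format for such self-checking code, but a minimal sketch along the following lines keeps the evaluation simple enough to review with confidence while producing the verbose log entries described above; the package and procedure names are illustrative, not a prescribed convention.

library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_textio.all;
use std.textio.all;

-- Illustrative self-checking support for a simulation testbench. Each
-- call compares an actual output against the expected result from the
-- test case and writes a verbose, timestamped log entry, so the log
-- records more than a bare PASS or FAIL.
package tb_check_pkg is
  procedure check_output(
    file     log      :       text;
    constant step_id  : in    string;            -- test case step reference
    constant expected : in    std_logic_vector;  -- expected result from the test case
    constant actual   : in    std_logic_vector;  -- sampled output of the design under test
    variable failures : inout natural            -- running failure count
  );
end package tb_check_pkg;

package body tb_check_pkg is
  procedure check_output(
    file     log      :       text;
    constant step_id  : in    string;
    constant expected : in    std_logic_vector;
    constant actual   : in    std_logic_vector;
    variable failures : inout natural
  ) is
    variable l : line;
  begin
    write(l, string'("step=") & step_id);
    write(l, string'(" time="));
    write(l, now);
    write(l, string'(" expected="));
    write(l, expected);
    write(l, string'(" actual="));
    write(l, actual);
    if actual = expected then
      write(l, string'(" result=PASS"));
    else
      write(l, string'(" result=FAIL"));
      failures := failures + 1;
    end if;
    writeline(log, l);
  end procedure check_output;
end package body tb_check_pkg;

A testbench process would declare the log file (for example, file results_log : text open write_mode is "tc_results.log";), sample each output at the time specified by the test case, call check_output for every expected result, and report the accumulated failure count in the final test summary.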

The summary of the analysis should capture the verification highlights and results, while the conclusion should state what the data shows, including whether the PLD met its operational requirements for the specified timing, voltage, clock, and temperature.

DO-254 Section 6.2.2 also mentions that analysis should be performed to assess whether the verification is complete. The verification coverage analysis determines whether all requirements have been verified by a review, analysis, or test. The verification can be performed at the most suitable hierarchy of the design. PLDs can be electrically tested with circuit card level tests, in conjunction with software tests, or in the course of system or higher level tests. Note that the higher level tests still require that the PLD inputs and outputs be controlled, observed, and recorded. The verification coverage analysis also determines whether the review, analysis, or test was appropriate for the requirement. System level tests conducted in a closed box manner would not be suitable for instrumenting a PLD for requirements-based testing. Output signals from a PLD that go directly to a box level output, and which allow the PLD timing to be observed, could conceivably be used for the corresponding PLD requirements. An examination of the circuits between the PLD and the box output would be needed to determine how those circuits affect the signal level and timing. Finally, the verification coverage analysis determines whether the results are correct. Any differences between actual results and expected results are captured and explained. Particular attention should be paid to verification results for requirements designated as safety related, especially if the verification results are not exactly as expected.

TEST

Testing is the verification of the actual hardware to show that the implementation meets the requirements. Testing is typically performed in-circuit with production or production equivalent hardware. Production equivalent means that the circuits and card layouts are the same as those that will be used in the aircraft. Any differences should be assessed for impact. Conformal coating on the circuit cards would interfere with electrical testing and is one of the typical differences between production and production equivalent electronic hardware. Test headers or sockets for devices to allow access to pins are also typical changes to the board under test. It is essential that PLDs be the same device type and programmed with the same content and same programming methods as those used in the manufacturing environment.

Testing can be performed at the system level, circuit card level, circuit level, or component level. DO-254 recommends that testing be performed at the highest level of integration possible within the system. Testing in a more integrated manner allows for easier evaluation of interfaces and detection of unintended side effects. Hardware testing for a processor card can take advantage of software tests performed for DO-178C credit. Hardware testing can also use existing production, environmental, and acceptance tests as long as the correlation and traceability to requirements is possible. Regardless of the type of test bed used, the test should instrument the circuit and design to allow the inputs and outputs to be recorded during the course of a test. Some instrumentation of circuit cards and PLDs is not possible during closed box system tests. Closed box tests would not be a suitable test environment for many requirements-based tests necessary for DO-254 compliance.

Once the test cases are selected for comprehensive coverage of the requirements, the test procedures are created to execute the test. These test procedures can be run on circuit cards, at the perimeter of a PLD mounted on a circuit card, or on a stand-alone device tester for a PLD. Test results are collected with oscilloscopes, voltage meters, logic analyzers, and other test equipment. Figure 7.13 shows the scenarios used for testing a PLD design.

TEST CASE SELECTION CRITERIA

While DO-254 does not explicitly require test cases as a separate data item, experience has shown that there are advantages to creating test cases that are separate from test procedures. The test cases are the set of inputs applied to the hardware or design with a corresponding set of predicted expected results and pass/fail criteria. The test cases are used to select the set of inputs that will be applied in a hardware test or a simulation.

With requirements that express pin level behavior, the process for verification can be optimized and made more efficient. The first opportunity for improving processes is to create the test cases against the requirements as soon as the requirements are mature and have been reviewed. This allows the verification activities for testing and simulation to start even before the design is created.

Image

FIGURE 7.13 Hardware Test

The second opportunity for improving processes is to base the test cases entirely on the requirements, and where possible, write them to be independent of any particular verification method (in particular simulation and hardware test). This will allow the same test cases to be used for hardware test, simulation, or where appropriate, both. For hardware test, the test procedures are written from the test cases to implement and apply the input stimuli in a physical hardware test environment. For simulation, testbenches are written from the test cases to implement and apply the input stimuli in a simulated or virtual environment.

A third area for efficiency with test cases is to format and organize the test cases to allow the reviewer to more quickly assess whether the requirement(s) have been comprehensively verified. Test cases can be formatted in tables with the inputs and outputs in columns and the set of input values and expected results across each row. The use of partitioning to separate individual test cases, allowing each test case to be more easily identified and correlated to individual requirements or even individual parameters within requirements, will aid in assessing and reviewing test cases and test procedures. Attaching some means of identification to steps or operations within test cases (such as step numbers) will also allow for easier tracking of the testing process and permit more accurate references to individual features of the test case.

Separating test cases from the procedures and benches also allows the analytical aspects of the effort to be concentrated on the test case design. Writing the test procedures and testbenches becomes a more rote task of translating the input stimulus to the respective environment.

Image

FIGURE 7.14 Common Test Cases

Figure 7.14 depicts the test case paradigm described above.

With requirements formulated as described in the requirements chapter, another optimization occurs: writing requirements that state an output in response to an input ensures that each requirement is verifiable—an input can be applied and an output can be observed. The inputs in the statement of the requirement become the inputs for the test case. The outputs in the statement of the requirement become the outputs for the test case. The transfer function or behavior expressed in the requirement allows the test case author to readily predict the output and expected results in the presence of the inputs. The structure and correlation of requirements and test cases is shown in Figure 7.15.

When translating test cases to simulation testbenches, the test case expected result can be evaluated against the simulation waveform through the use of self-checking code. The simulation log file should also state the expected result, the actual measured result, timestamp, and the pass/fail of the comparison, as described previously. For hardware testing, the event in the test case from which the timing for the expected result is measured, or the expected result itself if appropriate, can be translated into the trigger conditions for a logic analyzer or oscilloscope. Also note that simulation waveforms and waveforms from hardware test could be compared side by side with the expected results to make the results review more efficient.

While inputs in a simulation can be created as specified in the testbench, inputs for hardware test may have to use the signals already occurring in the hardware. The idea is for the requirements to describe waveforms or signals that will exist. When this style of requirements is used, hardware testing can take advantage of testing with the signals already present in the circuit. Signals will not have to be injected in order to achieve requirements-based tests. This style of testing works well for normal range or nominal conditions.

Image

FIGURE 7.15 Requirements and Test Cases

Testing for robustness—with invalid or unexpected inputs—will need signals created and applied to the circuit under test. Hardware tests with test equipment applying inputs to card edge connectors can be configured or programmed to generate invalid inputs. If the signals of interest are on the interior of a circuit, the invalid signals may have to be applied directly to the circuit under test. For PLDs, a stand-alone chip tester allows full access to all input pins and application of any combination of valid and invalid signals to the input pins.

Test cases should be selected for comprehensive verification of the requirement or requirements associated with the test case. The typical sequence for testing is to apply initial conditions and signals, apply power, apply and release the reset, apply the inputs of interest for the test case, and collect the outputs of interest at each step along the way.

Consider a requirements template constructed with the following aspects:

•  Power on behavior

•  Reset behavior

•  Assert conditions

•  Deassert conditions

•  Response to invalid inputs

The test case will be constructed as follows:

•  Apply power, measure output, and compare to expected results.

•  Apply power, allow circuit to enter operation, apply a reset, measure output, and compare to expected results.

•  Apply power, allow circuit to enter operation, apply a reset, allow circuit to enter operation, apply inputs described as necessary for the output(s) to assert or become active/true. Measure output and compare to expected results.

•  Apply power, allow circuit to enter operation, apply a reset, allow circuit to enter operation, apply inputs described as necessary for the output(s) to assert or become active/true, apply inputs described as necessary for the output(s) to deassert or become inactive/false. Measure output and compare to expected results.

•  Apply power, allow circuit to enter operation, apply a reset, allow circuit to enter operation, apply invalid inputs. Measure output and compare to expected results.

Depending on the functionality being verified, it may be possible for the fourth test to satisfy (or incorporate) the three preceding tests in the above sequence. Combining tests in this way can help optimize the verification process.
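A minimal sketch of how this sequence might be laid out in a simulation testbench follows, reusing the hypothetical DATA_RDY element sketched earlier in this chapter. In simulation the apply power step is represented by time zero and the default input states, whereas in a hardware test it is an explicit step; the timing values and the invalid-input step (an ENABLE pulse between clock edges) are assumptions for illustration only.

library ieee;
use ieee.std_logic_1164.all;

entity tb_data_rdy_gen is
end entity tb_data_rdy_gen;

architecture bench of tb_data_rdy_gen is
  signal clk        : std_logic := '0';
  signal reset      : std_logic := '1';
  signal enable     : std_logic := '0';
  signal fifo_empty : std_logic := '1';
  signal data_rdy   : std_logic;
begin
  dut : entity work.data_rdy_gen
    port map (clk => clk, reset => reset, enable => enable,
              fifo_empty => fifo_empty, data_rdy => data_rdy);

  clk <= not clk after 10 ns;   -- 50 MHz clock

  stim : process
  begin
    -- Power on: default input states applied, RESET asserted at time zero.
    wait for 25 ns;
    assert data_rdy = '0'
      report "Power-on check failed: DATA_RDY not low" severity error;

    -- Reset released: output remains deasserted.
    reset <= '0';
    wait until rising_edge(clk);
    wait for 1 ns;
    assert data_rdy = '0'
      report "Reset check failed: DATA_RDY not low" severity error;

    -- Assert conditions: ENABLE high with FIFO_EMPTY low.
    enable     <= '1';
    fifo_empty <= '0';
    wait until rising_edge(clk);
    wait for 1 ns;
    assert data_rdy = '1'
      report "Assert check failed: DATA_RDY not high" severity error;

    -- Deassert conditions: ENABLE low.
    enable <= '0';
    wait until rising_edge(clk);
    wait for 1 ns;
    assert data_rdy = '0'
      report "Deassert check failed: DATA_RDY not low" severity error;

    -- Invalid input: ENABLE pulses between clock edges and is never
    -- sampled high, so DATA_RDY is expected to stay deasserted.
    wait until falling_edge(clk);
    enable <= '1';
    wait for 2 ns;
    enable <= '0';
    wait until rising_edge(clk);
    wait for 1 ns;
    assert data_rdy = '0'
      report "Robustness check failed: DATA_RDY not low" severity error;

    report "Test case complete" severity note;
    wait;
  end process stim;
end architecture bench;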

In the step with invalid inputs, there may not always be an easily predictable expected result. If the input clock is varied outside of the expected tolerance of the frequency or duty cycle, there is no guarantee that the circuit will continue to behave in a deterministic manner. In this case, robustness tests can be used to explore and characterize the hardware for how far out of tolerance the inputs can become before the expected behavior no longer occurs. In instances where invalid inputs can be described and applied but the expected result is not known, the test can still be performed. In these instances, the behavior of the outputs should be collected and analyzed by the systems engineer and/or safety engineer to determine whether the behavior is acceptable or if the requirements and/or design need to be updated.

Test cases will test a functional element consisting of a single requirement or a logically related group of requirements. The test case strategy may elect to group coverage for requirements describing a particular output. The test cases will be solely determined from an inspection of the hardware requirements. Design-based tests will not be permitted since they only confirm that the design is the design, and not whether the design meets its intended functionality. Note that if the requirements specify the design’s implementation (implementation requirements) rather than its intended functionality, the resulting requirements-based test cases will have the same outcome (and deficiencies) as design-based tests. As noted in the chapter on requirements, this is one of the critical weaknesses of implementation requirements and why they should be avoided.

The effect of design information in requirements on the integrity of the verification process is illustrated in Figure 7.16 and Figure 7.17. As seen in Figure 7.16, requirements that express functionality, particularly those written according to the guidance in this book, are equally appropriate for both design and verification, and allow both processes to proceed independently. Since independence is maintained between the two processes, when the hardware is verified through requirements-based verification, the design is independently proven to comply with the requirements and by extension with the intended functionality as defined by the upper level system.

Image

FIGURE 7.16 Functional Requirements and Effective Verification

Image

FIGURE 7.17 Implementation Requirements and Ineffective Verification

In contrast, when design information finds its way into the requirements, there is no longer a connection to the system level functionality, no capture of intended functionality, and therefore no way for verification to determine that the design is functioning properly and according to the intended functionality as defined by the upper level system. In addition, if the design implementation in the requirements contains any errors, they will not be detected by verification because verification will be based on the same erroneous requirements as the design. This relationship is shown in Figure 7.17.

If implementation requirements are encountered, the burden of proving the correctness of the design shifts from verification to validation. Validation must then prove that the design implementation in the requirements will correctly implement the intended functionality. The methods used to do this may parallel those of verification—such as simulation and test conducted against the higher level functionality that would normally define the intended functionality of the design. While this approach can result in the same outcome of proving that the design meets its intended functionality, it is the hard way of getting to the goal. Note also that this approach may require that requirements validation, which should occur early in the design process, be conducted instead near the end of the design process when the necessary design and hardware are available to validate the requirements. Overall it is not the recommended approach to design assurance.

The test cases are limited to stimulating device or signal inputs and predicting the expected pin level response. Restricting test input and output to the pin or signal level helps ensure that the requirements (i.e., functionality) are verified, as opposed to verifying the design. This method also allows use of the same test cases for functional simulation and in-circuit device tests.

Normal test cases are designed with the following criteria:

•  Test coverage of a requirement by directly testing the requirement, or in combination with another requirement

•  All combinations of valid inputs, as necessary, to demonstrate that the circuit or output behavior meets the requirement(s)

•  All assert conditions tested

•  All deassert conditions tested

•  Inputs are varied to provide comprehensive coverage of conditions and decisions expressed in the requirement

•  Comparisons (e.g., less than, greater than, less than or equal to, equal to, etc.) expressed as conditions in the requirements use inputs just below, equal to, and just above the comparison value (a sketch follows this list)

•  The effect of the inputs should be observable as a change on the output

•  Sequence of tests is considered to show the effect of the inputs on the outputs

•  All combinations of valid inputs, as necessary, to demonstrate that the circuit or output behavior meets the requirement(s) and does not have unintended side effects

•  Typical input signal timing tolerance

•  Typical clock timing tolerance

•  Equivalence class of values for input address bus

•  Minimum and maximum values for input address bus

•  Equivalence class of values for input data bus

•  Minimum and maximum values for input data bus

•  Input changing before, during, and after clock transition to show registration of input on rising or falling edge as specified in the requirements or design standards

•  All possible state transitions are covered when state machines are employed in the design—this will by definition be a design-based test

•  Use varying data when the test cases are repetitive

•  Use a structured sequence for test cases

•  Apply power

•  Apply reset

•  Allow device to start normal operations

•  Apply desired stimulus

•  Alternate steps in a test case between valid and invalid conditions, assert and deassert behaviors, and always end with a return to valid inputs and outputs in normal operating conditions
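For the comparison criterion noted above, the boundary values can be captured directly in the test case data. The following sketch assumes a hypothetical requirement of the form "out1 shall assert high when count100 is greater than or equal to 0x0064" and records the values just below, equal to, and just above the comparison value along with the predicted pin level response.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Boundary-value stimulus for a hypothetical "count100 >= 0x0064" condition.
package boundary_vectors_pkg is
  type vector_rec is record
    count100      : unsigned(15 downto 0);  -- stimulus applied to the input
    expected_out1 : std_logic;              -- predicted pin level response
  end record;

  type vector_array is array (natural range <>) of vector_rec;

  constant GE_0064_VECTORS : vector_array := (
    (count100 => x"0063", expected_out1 => '0'),  -- just below the comparison value
    (count100 => x"0064", expected_out1 => '1'),  -- equal to the comparison value
    (count100 => x"0065", expected_out1 => '1')   -- just above the comparison value
  );
end package boundary_vectors_pkg;

A testbench or test procedure can then step through the array, applying each count100 value and checking out1 against the predicted result.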

Robustness test cases are designed with the following considerations:

•  Incorrect inputs or combinations of inputs

•  Unexpected inputs or combinations of inputs

•  Toggling inputs that are not listed in the associated requirement(s)

•  Invalid input timing (e.g., setup and hold violations)

•  Invalid state transitions

•  Variation of clock duty cycle and/or frequency outside the specified tolerance

•  Asserting and deasserting input signals between clock edges

•  Applying reset during a test

•  Device select/deselect during a test

•  Asynchronous or glitch timing

•  Inserting additional clock cycles

•  Removing clock cycles

Trace tags, such as text with a unique identifier, should be embedded in the test cases so traceability to the requirement associated with the test case can be listed in a trace matrix. Embedding the requirement identifier in the test case can cause an update to the test case when the requirements change, regardless of whether the test cases need to change. The test case author should make every attempt to provide full test coverage of a requirement with a set of test cases in a test case file. Only when absolutely necessary should test coverage of a requirement be spread across multiple test case files (test case groups). The rationale for this grouping is to keep the data together and make the review easier.

Test cases can also use naming conventions for the file names to make it easy to find the data in the configuration management system. An example of the naming conventions follows.

For traceability and ease of comprehension, the following file naming convention can be used.

•  Requirement: HRD-XXX-YYY-NNN

•  Test case file: TC-HRD-XXX-YYY-NNN.xls

•  Test case or step in test case file: TC-HRD-XXX-YYY-NNN_001, TC-HRD-XXX-YYY-NNN_002, TC-HRD-XXX-YYY-NNN_003

•  Testbench (simulation): TB-HRD-XXX-YYY-NNN.vhd

•  Test procedure (hardware test): TP-HRD-XXX-YYY-NNN.txt

•  Test results log: TR-HRD-XXX-YYY-NNN.log or TR-HRD-XXX-YYY-NNN.wav (waveform capture)

An example format for a test case is shown in Table 7.3.

TABLE 7.3

Test Case Example

Summary of Test Case: This test case confirms that the SDLR_DAT(31:0) data value clears to 0x0000_0000 within 1 usec after the RESET input deasserts low or an SDLR clear command is generated. It reads address 0x8001 both before and after each reset method to confirm that P_DAT(31:0) outputs 0xFFFF_FFFF before the reset, and then 0x0000_0000 afterward.

Type of Test Case: Normal

Test Case Trace Tag: TC_CDBR-50_and_55_and_60_001

Verification Methods: Simulation, Test

Steps for Test Case:

Initial Setup: Steps 1 and 2 initialize the device by setting all inputs to their default states, then toggling the RESET input to clear and initialize the logic.

Step 1
Action/Input: Set the inputs to the following default states: CLK_50M = 50 MHz square wave, 50% duty cycle; AVAL_L = 1; RW = 1; P_ADR(15:0) = 0x0000; RESET = 0.
Expected Result: P_DAT(31:0) = 0xZZZZ_ZZZZ

Step 2
Action/Input: Set RESET high for 100 usec and then back low to clear and initialize all internal logic to a known state, then wait for 100 usec.
Expected Result: P_DAT(31:0) = 0xZZZZ_ZZZZ

Load SDLR_DAT with a known value and then verify that it is cleared to 0x0000_0000 within 1 usec after a RESET (CDBR-60(ii)):

Step 3
Action/Input: Write 0xFFFF_FFFF to address 0x8001 to pre-load the SDLR_DAT register.
Expected Result: None

Step 4
Action/Input: Read address 0x8001 to confirm that the SDLR_DAT register is initialized for this test.
Expected Result: P_DAT(31:0) = 0xFFFF_FFFF

Step 5
Action/Input: Set RESET high for 100 usec and then back low. Read address 0x8001 1 usec after the falling edge of RESET to confirm that the SDLR_DAT register cleared to 0x0000_0000.
Expected Result: P_DAT(31:0) = 0x0000_0000

TEST CASES AND REQUIREMENTS

Now to tie it all together. The requirements express the behavior of the output when the inputs satisfy certain conditions. Using the constructs described in the requirements chapter, the test cases can be readily constructed from the form of the requirement.

EXAMPLE 7.1

An output that asserts when a set of input conditions are all met at the same time, or the logical AND of the input conditions is expressed:

out1 shall assert high within 50 nanoseconds when the following conditions are satisfied:

•  Condition1

•  Condition2

•  Condition3

The test cases to verify these conditions are shown in Table 7.4. The test cases in Table 7.4 are comprehensive. An exhaustive set of all eight possible combinations of the inputs could also be used. Condition1 through Condition3 could each be a logic one on an input or a more complex expression, such as "Input1 has remained high for the last 100 nanoseconds" or "count100 is greater than or equal to 0x0064".

TABLE 7.4

Test Cases for AND

Condition1    Condition2    Condition3    out1
True          True          True          True
True          True          False         False
True          False         True          False
False         True          True          False
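To illustrate how such a table translates into a simulation testbench, the following sketch drives the rows of Table 7.4 from a constant array and checks out1 after the 50 nanosecond response time stated in the requirement. A trivial stand-in models the device under test so the sketch is self-contained; in practice the instance would be the PLD design under verification, and the response delay and signal names are assumptions.

library ieee;
use ieee.std_logic_1164.all;

entity tb_and_example is
end entity tb_and_example;

architecture bench of tb_and_example is
  type test_row is record
    cond1, cond2, cond3 : std_logic;   -- input conditions ('1' = True)
    exp_out1            : std_logic;   -- expected out1 after 50 ns
  end record;
  type test_table is array (natural range <>) of test_row;

  -- Table 7.4, one row per test case.
  constant TESTS : test_table := (
    ('1', '1', '1', '1'),
    ('1', '1', '0', '0'),
    ('1', '0', '1', '0'),
    ('0', '1', '1', '0')
  );

  signal cond1, cond2, cond3 : std_logic := '0';
  signal out1                : std_logic;
begin
  -- Stand-in for the device under test (assumed behavior only).
  out1 <= cond1 and cond2 and cond3 after 20 ns;

  stim : process
  begin
    for i in TESTS'range loop
      cond1 <= TESTS(i).cond1;
      cond2 <= TESTS(i).cond2;
      cond3 <= TESTS(i).cond3;
      wait for 50 ns;   -- the requirement allows 50 ns for out1 to respond
      assert out1 = TESTS(i).exp_out1
        report "Table 7.4 row " & integer'image(i + 1) & " failed"
        severity error;
    end loop;
    report "AND test cases complete" severity note;
    wait;
  end process stim;
end architecture bench;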

TABLE 7.5

Test Cases for OR

Condition1    Condition2    Condition3    out1
False         False         False         False
False         False         True          True
True          True          False         True
True          False         False         True

EXAMPLE 7.2

An output that asserts when any one of a set of input conditions is met, or the logical OR of the input conditions is expressed:

out1 shall assert high within 50 nanoseconds when one or more of the following conditions are satisfied:

•  Condition1

•  Condition2

•  Condition3

The test cases to verify these conditions are shown in Table 7.5. The test cases in Table 7.5 are comprehensive. An exhaustive set of all eight possible combinations of the inputs could also be used. As before, the conditions could be a logic input or a more complex expression.

EXAMPLE 7.3

An output that asserts when none of the input conditions are met, or the logical NOR of the input conditions is expressed:

out1 shall assert high within 50 nanoseconds when none of the following conditions are satisfied:

•  Condition1

•  Condition2

•  Condition3

The test cases to verify these conditions are shown in Table 7.6. The test cases in Table 7.6 are comprehensive. An exhaustive set of all eight possible combinations of the inputs could also be used. As before, the conditions could be a logic input or a more complex expression.

TABLE 7.6

Test Cases for NOR

Condition1    Condition2    Condition3    out1
False         False         False         True
False         False         True          False
True          True          False         False
True          False         False         False

EXAMPLE 7.4

An output that asserts when at least one of a set of input conditions is not met, or the logical NAND of the input conditions is expressed:

out1 shall assert high within 50 nanoseconds when at least one of the following conditions are not satisfied:

•  Condition1

•  Condition2

•  Condition3

The test cases to verify these conditions are shown in Table 7.7. The test cases in Table 7.7 are comprehensive. An exhaustive set of all eight possible combinations of the inputs could also be used. As before, the conditions could be a logic input or a more complex expression.

TABLE 7.7

Test Cases for NAND

Condition1    Condition2    Condition3    out1
True          True          True          False
True          True          False         True
True          False         True          True
False         True          True          True

TABLE 7.8

Test Cases for XNOR

Condition1    Condition2    Condition3    out1
False         False         False         True
False         False         True          False
False         True          False         False
False         True          True          False
True          False         False         False
True          False         True          False
True          True          False         False
True          True          True          True

EXAMPLE 7.5

An output that asserts when the input conditions are either all met or all not met at the same time, or the logical XNOR of the input conditions is expressed:

out1 shall assert high within 50 nanoseconds when either all or none of the following conditions are satisfied:

•  Condition1

•  Condition2

•  Condition3

The test cases to verify these conditions are shown in Table 7.8. This example uses all eight possible combinations of the inputs to disambiguate the requirements from other logic constructs. As before, the conditions could be a logic input or a more complex expression.

EXAMPLE 7.6

An output that asserts when only one of the input conditions is met, or the logical XOR of the input conditions is expressed:

out1 shall assert high within 50 nanoseconds when only one of the following conditions are satisfied:

•  Condition1

•  Condition2

TABLE 7.9

Test Cases for XOR

Condition1    Condition2    out1
False         False         False
False         True          True
True          False         True
True          True          False

The test cases to verify these conditions are shown in Table 7.9. This example uses all four possible combinations of the inputs to disambiguate the requirements from other logic constructs. As before, the conditions could be a logic input or a more complex expression.

In Figure 7.18, the inputs are INPUT1, INPUT2, INPUT3, and RESET. The requirements are constructed as outlined in the requirements section of this book. The power-on behavior is stated in Requirement 1, the reset response is stated in Requirement 2, the assert response is stated in Requirement 3, the deassert response is stated in Requirement 4, and the response to invalid inputs is stated in Requirement 5.

Figure 7.18 shows the requirements for this functional element and lists the test cases associated with each requirement. The test cases are constructed from the requirements. For Requirement 3, INPUT1-INPUT2-INPUT3 all being 1 is verified. Using this test case provides coverage of the “when the following conditions are all satisfied” or logical AND of the inputs. For Requirement 4, INPUT1-INPUT2-INPUT3 all being 0 is verified. Using this test case provides coverage of the “when the following conditions are all satisfied.” The test cases for Requirement 5 verify the other combinations of INPUT1-INPUT2-INPUT3. In this case using all eight combinations was a trivial task and would be easy to put into a testbench or test procedure. By organizing the test cases in a table, with the inputs in the columns and the values for the inputs across the rows, verification coverage of the requirements can be readily assessed. Table 7.10 shows the test cases for the requirements. If one imagines the HDL resulting from this set of requirements, it is also easy to predict that the test coverage of the design from the test cases will yield high coverage. In other words, the elemental analysis will demonstrate that the requirements-based tests provide full coverage of the design.

Note that the test cases can be rearranged and some of the cases reused in order to ensure the visibility of the verification. This sequence is shown in Table 7.11. Test cases already defined have been repeated and inserted between other test cases so that the output alternates between high impedance (Z), 1 and 0.

Image

FIGURE 7.18 Requirements and Associated Test Cases

TABLE 7.10

Test Cases for Example Requirements

Image

TABLE 7.11

Test Cases for Example Requirements

Image
