ATDD | Acceptance Test-Driven Development, a software development methodology based on designing acceptance tests first (at the unit testing level) and then developing the minimum code necessary for these tests to pass. It is related to BDD and TDD. It should be noted that the term “acceptance” is taken here in the sense of acceptance by the developers, not acceptance by the users. |
BDD | Behavior-Driven Development, an agile development method combining the principles and techniques of TDD with languages (e.g. Gherkin) used to define the expected behaviors and results. BDD is considered an effective technique when the problem to be solved is complex. |
BI | Business Intelligence, the term encompassing the strategies and technologies used by companies for data analysis and business information management. In general, this involves the analysis of large volumes of data, structured or not, in order to interpret this data, identify opportunities and implement effective strategies based on their analyses. |
BIT | Built-In Test, an internal component test (hardware or software) allowing the component to ensure its proper technical or functional operation. |
BVA | Boundary Value Analysis, a testing technique focusing on the analysis of boundary values (valid and invalid) of ordered equivalence partitions. |
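As a minimal sketch of BVA (assuming a single ordered integer partition bounded by `low` and `high`; the function name and the age range are illustrative, not from the source), the values of interest are the limits themselves and their immediate invalid neighbors:

```python
def boundary_values(low: int, high: int) -> dict:
    """Return the valid and invalid boundary values of the ordered
    equivalence partition [low, high] (two-value boundary analysis)."""
    return {
        "valid": [low, high],            # on the limits, inside the partition
        "invalid": [low - 1, high + 1],  # just outside the partition
    }

# e.g. for an input field accepting ages 18..65:
cases = boundary_values(18, 65)
```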
CAF | Capacité à Faire, team bandwidth, the ability of a team to perform a job or effort. |
CBIT | Continuous Built-In Test, an internal component test (hardware or software) that allows the component to continuously ensure its proper technical or functional operation. Such a test allows the component to warn a supervisor in the event of a drop in performance or other non-operations. |
CCB | Change Control Board, a change management committee. This may concern requirements, but, more generally, configuration changes of the systems under its control. |
CCP | Conformance Creation Plan, a plan defining how the requirements will be demonstrated as compliant, via a combination of Inspections, Analyses, Demonstrations (software tests) and Tests in operational environments (acceptance tests and pilot phases). The CCP is produced early in the design stage and is to be included in the PDR milestone. |
CD | Continuous Delivery, a software engineering approach where teams deliver software in short cycles, ensuring that software can be delivered reliably at any time, without doing so manually. This approach tends to reduce cost, time and risk, by enabling incremental application updates. A simple and repeatable process is important for continuous delivery. |
CD | Continuous Deployment, a software engineering approach where features are delivered frequently through automated deployments. In an environment where data-driven microservices provide the functionality, and where microservices may have multiple instances, CD consists of instantiating a new version of a microservice and disposing of an old version. |
CDR | Critical Design Review, a design milestone review, after the PDR and intended to allow the start of the design of the subsystem or system with the least possible risk. |
Certification | A formal attestation or confirmation of certain characteristics through an organization certification process. |
CI/CD | (Combined) Continuous Integration and Continuous Delivery practices, forcing automation in building, testing and deploying applications. DevOps practices involve continuous development, continuous testing, continuous integration, continuous deployment and continuous monitoring of software applications throughout their life cycle. The CI/CD Pipeline underpins DevOps operations. |
COTS | Commercial Off The Shelf, an existing commercial product on the shelf, which can be purchased “as is”. This applies to both software and hardware products. |
Coverage | The rate at which a set of requirements or risks is covered by test cases and by the executions of these test cases, in order to demonstrate that these requirements or risks are correctly verified and validated without overlooking any aspect. |
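A coverage rate can be sketched as the proportion of requirements (or risks) touched by at least one test case; the function below is illustrative, not a formula given in the source:

```python
def coverage_rate(covered: set, total: set) -> float:
    """Percentage of requirements (or risks) in `total` that are
    covered by at least one test case (listed in `covered`)."""
    if not total:
        return 100.0
    return 100.0 * len(covered & total) / len(total)

# two of four requirements covered -> 50% requirement coverage
rate = coverage_rate({"R1", "R2"}, {"R1", "R2", "R3", "R4"})
```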
CRI | Coverage of Risks, a metric measuring the coverage of risks according to their criticality level. |
CTT | Classification Tree Testing, a testing method based on classification trees. |
DCOT | Data COmbination Testing, a test method based on the combination of nominal data and their use. It is often necessary to have a diagram of the data architecture in order to identify when and where this data is used in the application or system. |
DCYT | Data CYcle Testing, a testing technique focusing on the life cycles of data (creation, referencing, use, destruction) within subsystems and systems, requiring an understanding of the architecture of data in systems and subsystems. |
DDE | Defect Detection Efficiency, the metric to determine the efficiency of processes in detecting defects. |
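A common formulation of this metric (assumed here, since the source gives no formula) is the share of all known defects that the test process caught before release:

```python
def dde(found_internally: int, escaped: int) -> float:
    """Defect Detection Efficiency: percentage of all known defects
    that were detected by the test process rather than escaping to
    the field (a common formulation)."""
    total = found_internally + escaped
    if total == 0:
        return 100.0
    return 100.0 * found_internally / total

# 90 defects found in test, 10 found in production -> 90% DDE
efficiency = dde(90, 10)
```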
Debugging | The process for finding, analyzing and eliminating the causes of failure in software, mainly carried out by designers and developers. |
DOD | Definition of Done, a checklist defining when a task can be considered completely finished (which should include updating documentation, providing test results, etc.). |
DOM | DOMain testing, a test technique combining the equivalence partitioning and boundary value testing techniques, applied to adjacent ordered equivalence partitions (usually numerical values). The term “domain” is to be understood here as a subset of variables processed by the system under test whose values can be determined to lie within an interval bounded by limit values. |
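For adjacent ordered partitions, domain testing places test points on each limit and just either side of it, so that every partition around a limit receives a value. A sketch (the function name and limits are illustrative):

```python
def domain_test_points(partition_limits):
    """Given the sorted limits separating adjacent ordered partitions
    (e.g. [0, 10] for the partitions <0, 0..9, >=10), return one test
    point on each limit and one immediately on either side of it."""
    points = set()
    for limit in partition_limits:
        points.update({limit - 1, limit, limit + 1})
    return sorted(points)

# limits 0 and 10 yield points covering all three adjacent partitions
points = domain_test_points([0, 10])
```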
DOR | Definition of Ready, a checklist defining all the things necessary for a task to be able to run without having to wait for information not yet available from other activities. |
DTT | Decision Table Testing, a test technique making it possible to limit the combination of tests to be executed when there are combinations of business rules to be processed. |
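A decision table can be sketched as a mapping from condition combinations to actions; one test case per column (rule) then covers the table. The discount rule below is hypothetical, purely for illustration:

```python
# Hypothetical decision table: (is_member, order_over_100) -> action
DECISION_TABLE = {
    (True,  True):  "20% discount",
    (True,  False): "10% discount",
    (False, True):  "5% discount",
    (False, False): "no discount",
}

def discount(is_member: bool, order_over_100: bool) -> str:
    """Apply the business rules encoded in the decision table."""
    return DECISION_TABLE[(is_member, order_over_100)]

# one test case per rule is enough to cover the whole table
all_rules_pass = all(
    discount(m, o) == action for (m, o), action in DECISION_TABLE.items()
)
```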
E2E | End to End, tests that carry out a functional process from the beginning to the end of this process. This type of test can be run on a single system as well as interact with several interconnected systems. |
EAI | Enterprise Application Integration, a middleware that integrates systems and applications across a company. EAIs connect applications in a way that simplifies and automates business processes as much as possible, while avoiding significant changes to enterprise applications. This allows data to be integrated between various systems, eliminates dependencies on a vendor and provides a common interface for users. |
ETL | Extract, Transform, Load, a generic procedure for copying data from one or more sources to one or more destination systems. These software packages include the conversion of data from one format to another and their transfer. |
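The three ETL steps can be sketched in miniature; everything below is in-memory and illustrative (real ETL packages operate on databases and files), with the CSV-like source format assumed for the example:

```python
def extract(source):
    """Extract: read rows from the source system."""
    return list(source)

def transform(rows):
    """Transform: convert from one format (CSV-like strings)
    to another (dictionaries with typed fields)."""
    return [{"name": n, "qty": int(q)}
            for n, q in (row.split(",") for row in rows)]

def load(rows, destination):
    """Load: transfer the converted rows to the destination system."""
    destination.extend(rows)
    return destination

warehouse = []
load(transform(extract(["widget,3", "gadget,5"])), warehouse)
```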
Fault injection, fault seeding or defect injection | A test technique aimed at verifying that the defined test processes (around automation tools) are capable of identifying the types of anomalies expected. This technique consists of creating known anomalies and checking whether they are detected by the processes and tools implemented. |
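The technique can be sketched as seeding known anomalies and measuring whether the detection process finds them (the record/checksum scheme below is illustrative, not from the source):

```python
def seed_faults(records, seeded_ids):
    """Fault seeding: corrupt the records whose ids are in seeded_ids."""
    return [{**r, "checksum": "INVALID"} if r["id"] in seeded_ids else r
            for r in records]

def detect_faults(records):
    """The detection process under evaluation: flag bad checksums."""
    return {r["id"] for r in records if r["checksum"] != "OK"}

records = [{"id": i, "checksum": "OK"} for i in range(5)]
seeded = {1, 3}
found = detect_faults(seed_faults(records, seeded))

# share of the seeded anomalies actually caught by the process
detection_rate = len(found & seeded) / len(seeded)
```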
GQM | Goal Question Metric, a system measurement approach that defines: |
– a conceptual or managerial level (the Goal); |
– an operational level (set of Questions) to focus on a specific characteristic; |
– a quantitative level (Metric) based on the models and associated with each question. |
IBIT | Initial Built-In Test, an internal test of a component (hardware or software), run at the component’s initial start-up, that ensures its proper technical or functional operation. |
Inspection | A formal review of a work product (deliverable) to identify issues, which uses defined roles and metrics to improve the review process. |
MTBF | Mean Time Between Failures, the arithmetic mean of the operating time between successive system failures. This measurement depends on the definition of a system failure, usually when the system is “out of order”. The higher the MTBF, the longer the system operates before failing. |
MTTR | Mean Time to Recover, an average time for the system to recover and restart a normal operation. |
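Both metrics are simple arithmetic means; as a sketch (the sample durations are illustrative), and noting that availability is commonly estimated as MTBF / (MTBF + MTTR):

```python
def mtbf(uptimes_hours):
    """Mean Time Between Failures: mean of the operating periods
    observed between successive failures."""
    return sum(uptimes_hours) / len(uptimes_hours)

def mttr(repair_times_hours):
    """Mean Time To Recover: mean time needed to restore normal
    operation after a failure."""
    return sum(repair_times_hours) / len(repair_times_hours)

# illustrative data: three uptime periods, two repairs
up, repair = mtbf([100.0, 200.0, 300.0]), mttr([2.0, 4.0])
availability = up / (up + repair)
```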
MVP | Minimum Viable Product, the version of a product with enough features for customers to use. |
NRT | Non-Regression Tests, functional tests aimed at ensuring the absence of regression or side effects between a previous version and the current version. |
ODC | Orthogonal Defect Classification, an orthogonal classification of defects. This methodology was developed by IBM in the 1980s, associating defect typologies with design phases/processes and measuring the presence of these defect typologies in order to improve design processes. |
PBI | Product (or Project) Backlog Item, an element (a requirement, user story or use case to be developed, or an anomaly to be corrected) in the Agile Scrum world, describing what is required of a product (or project); backlog items are generally sorted in descending order of priority. |
PBIT | Power-on Built-In Test, a test carried out during the start-up of a subsystem or a system and making it possible to display the state of the subsystem or system. |
PBR | Perspective-Based Reviews, a review technique in which reviewers examine the work product from different stakeholder perspectives (e.g. user, designer, tester), so that each reviewer focuses on different types of defects. |
PCT | Process Cycle Test, a test method intended to ensure that the administrative actions implemented obtain the expected results. |
PDR | Preliminary Design Review, a design milestone where the main orientations are defined by the design teams and validated by the client teams. |
RCA | Root Cause Analysis, an analysis of the processes in order to identify the initial – root – cause for a defect. |
Requirement | A condition or ability required of a component, product or system to solve a problem or achieve an objective that must be held or possessed by this component, product or system in order to satisfy a contract, standard, specification or other formally or informally imposed document. Requirements can be formalized in various documents, for example: User Stories, Use Cases, Features, etc. or be informal (not formalized). |
Review | An evaluation of the state of a product or project to identify deviations from the expected results and recommend improvements. For example: management review, informal review, technical review, technical inspection and proofreading. |
Root cause | The cause of an anomaly; see also RCA. |
RUN | The activity or phase of testing, after SETUP, that actually executes the test. |
SETUP | The activity or test phase to prepare the data and environment necessary for the execution of a test that will take place in the RUN phase. |
SLA | Service Level Agreement, an agreement identifying the level of service (e.g. response time, availability) expected from a system or component. SLAs are often associated with penalties and contractual clauses between suppliers and users. |
SMART | Specific, Measurable, Achievable, Realistic and Traceable (the final criterion is sometimes read as Time-bound), a set of criteria applying to goals or requirements. |
TDD | Test-Driven Development, a software development methodology based on the automation of low-level unit tests. It starts with designing a test that cannot initially pass, since the corresponding code does not yet exist, followed by writing the minimum code necessary for the test to pass. |
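The red-green cycle described above can be sketched as follows (the FizzBuzz example is a standard illustration, not taken from the source):

```python
# TDD sketch: the test is written first and fails (red), then the
# minimum code is written to make it pass (green).

# Step 1 (red): this test cannot pass while fizzbuzz does not exist.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): the minimum code for the test to pass.
def fizzbuzz(n: int) -> str:
    out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return out or str(n)

test_fizzbuzz()  # now passes; a refactoring step would follow
```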
TDR | Technical Debt Ratio, a metric to measure technical debt. |
TEARDOWN | A test activity or phase that removes test data created during testing to allow test reruns in the future. |
Technical review | A formal peer review of a work product by a team of technically qualified people who examine the deliverable and identify deviations from specifications and standards. |
Test | The process, present in all life cycle activities, static and/or dynamic, of planning and evaluating software products and related work products, to determine whether they meet the requirements, to demonstrate that they are fit for purpose and to detect anomalies. This is sometimes known as V&V (Verification and Validation). |
Test case | A set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular test objective or condition, such as executing a particular path of a program or verifying the compliance with a specific requirement. |
Test condition | Something – a behavior or a combination of conditions – that may be interesting or useful to test. |
Test objective | A reason or purpose for designing and performing a test. |
Validate | Demonstrate, by providing objective evidence, that a system can be used by users for their specific tasks [ISO29119-1]. |
Validation | A process to ensure that specifications are correct and complete. The system life cycle process can use software specifications and derived requirements for system validation [DO178B]. |
Verification | An evaluation of the products of a process in order to guarantee their accuracy and consistency with respect to the input data and the rules applied to this process [DO178B]. |
Verify | Demonstrate, by providing objective evidence, that specific requirements have been met [ISO29119-1]. |