A3 MOCK CTFL EXAMINATION COMMENTARY  

Question 1

This is a K1 question relating to Learning Objective CTFL-1.2.1 – Recall the common objectives of testing.

Testing identifies defects, but it neither locates them in the code (which rules out option a) nor debugs or corrects them (which rules out option d). Both of these activities are the responsibility of developers, once any defects have been identified. Testing cannot ensure that no defects are present (which rules out option c) – in fact this is one of the seven testing principles identified in the syllabus. This leaves option b as the correct response. As a quick cross-check, preventing defects is given as a testing objective in section 1.2 of the syllabus.

Question 2

This is a K2 question relating to Learning Objective CTFL-4.1.3 – Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results.

A good test case must incorporate the expected results (iii) and should be traceable back to the requirements (i), which is option b. Referencing the current version of the requirements (ii) is important but does not, in itself, provide traceability. Test cases are generated before test procedures and test execution schedules, so (iv) is incorrect. Identifying the author of a test case (v) does not, in itself, affect quality.

Question 3

This is a K2 question relating to Learning Objective CTFL-2.1.1 – Explain the relationship between development, test activities and work-products in the development life cycle by giving examples using project and product types.

Option a is a requirement of a linear model such as the V-model only, not iterative models, which allow for requirements to be changed throughout development.

Option b is a requirement of iterative models to allow the user representatives to try the system out early. Linear models expect the system to be delivered at the end of the process.

Option c is the correct answer.

Option d is incorrect because neither model stipulates the required levels of testing. The V-model representations often show four design activities on the left matched by four testing activities on the right, but this is not mandatory.

Question 4

This is a K3 question relating to Learning Objective CTFL-4.3.1 – Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables.

The valid input partitions are: age >= 18 years, a score >= 60 in the certification exam, and >= 2 years of experience as a developer. These three valid input partitions can generate all valid outputs, and any other input partitions would be invalid, so option a is the correct answer.
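To make the partitions concrete, a minimal sketch in Python follows, assuming the three eligibility rules given in the question; the function name and the test values are illustrative, not part of the exam material:

    # Valid partitions from the question: age >= 18, exam score >= 60,
    # and >= 2 years of experience as a developer.
    def is_eligible(age: int, score: int, years_experience: int) -> bool:
        return age >= 18 and score >= 60 and years_experience >= 2

    # Equivalence partitioning needs only one representative value per
    # partition: all three valid partitions together, then each invalid one.
    assert is_eligible(30, 75, 5)          # all valid partitions
    assert not is_eligible(17, 75, 5)      # invalid age partition
    assert not is_eligible(30, 59, 5)      # invalid score partition
    assert not is_eligible(30, 75, 1)      # invalid experience partition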

Question 5

This is a K2 question relating to Learning Objective CTFL-4.3.2 – Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured.

Option a is incorrect because use cases model process flow, but do not model the detail of business rules, which would be decision table testing.

Option b is incorrect because alternative scenarios are not always included, and they test alternative paths through the process flow, not what happens when a component fails.

Option d is incorrect because use cases do not model all inputs and outputs, which would be equivalence partitioning.

This leaves option c, which correctly states that ‘Use cases are good for defining user acceptance tests with user participation’. This is a common way of tackling user acceptance testing.

Question 6

This is a K3 question relating to Learning Objective CTFL-5.6.2 – Write an incident report covering the observation of a failure during testing.

The information provided covers most of the key areas of an incident report, but more detail would help the initial investigation. All five of the additional items suggested would add some value, with the possible exception of item (i), which will be valuable for later analysis but is not really needed for an initial investigation. Item (iv), test tools used, is also not essential for an initial investigation. The other three items are important because they provide more detail about the failure and a guide to the urgency of the investigation. Option b is the only one that lists all three of these key pieces of information.

Question 7

This is a K2 question relating to Learning Objective CTFL-1.2.3 – Differentiate testing from debugging.

Of the statements about testing, (i) is better than (ii) because it relates to removing defects before failures are detected. Of the statements about debugging, (v) is better than (iv) because it relates to removing causes of failure rather than simply removing defects as they are found. Item (iii) relates to another aspect of testing, so is not directly relevant to the question. The best combination is therefore (i) and (v) – option d – because it combines the best descriptions of testing and debugging in the list.

Question 8

This is a K3 question relating to Learning Objective CTFL-4.3.1 – Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables.

The sequence of inputs would take the system through the following sequence of states. From Neutral, ‘D’ would transition to state 1, then ‘+’ would transition to state 2, another ‘+’ would transition to state 3, the ‘N’ would return the system to the Neutral state and the final ‘−’ would have no effect. This sequence is provided by option b.

The main distracter is option c, which assumes, incorrectly, that a ‘−’ from state ‘Neutral’ would transition to state 1.
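The walk through the states can be checked with a small sketch, assuming a transition table consistent with the commentary; the '−' transitions between drive states are an assumption made for illustration only:

    # State machine implied by the commentary; undefined event/state
    # pairs (such as '-' in Neutral) simply have no effect.
    TRANSITIONS = {
        ('Neutral', 'D'): '1',
        ('1', '+'): '2', ('2', '+'): '3',
        ('1', 'N'): 'Neutral', ('2', 'N'): 'Neutral', ('3', 'N'): 'Neutral',
        ('2', '-'): '1', ('3', '-'): '2',   # assumed, not in the commentary
    }

    def run(events, state='Neutral'):
        visited = [state]
        for event in events:
            state = TRANSITIONS.get((state, event), state)
            visited.append(state)
        return visited

    print(run(['D', '+', '+', 'N', '-']))
    # -> ['Neutral', '1', '2', '3', 'Neutral', 'Neutral']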

Question 9

This is a K3 question relating to Learning Objective CTFL-4.3.1 – Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables.

From the state transition table, the nine entries marked with an X indicate invalid transitions, so the answer is 9 (option c). The same information can be gleaned from the diagram by applying every possible event to every possible state and counting those that do not achieve a state transition.
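The counting method can be sketched as follows; the table here is a small hypothetical one, since the question's own table is not reproduced in this commentary:

    # 'X' marks an invalid transition. Apply every event to every state
    # and count the entries marked invalid.
    states = ['S1', 'S2', 'S3']
    events = ['e1', 'e2', 'e3']
    table = {
        ('S1', 'e1'): 'S2', ('S1', 'e2'): 'X',  ('S1', 'e3'): 'X',
        ('S2', 'e1'): 'X',  ('S2', 'e2'): 'S3', ('S2', 'e3'): 'X',
        ('S3', 'e1'): 'S1', ('S3', 'e2'): 'X',  ('S3', 'e3'): 'X',
    }

    invalid = sum(1 for s in states for e in events if table[(s, e)] == 'X')
    print(invalid)  # -> 6 for this hypothetical table; 9 in the question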

Question 10

This is a K1 question relating to Learning Objective CTFL-4.2.1 – Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each.

Option a is incorrect because specification-based techniques do have associated coverage measures, as do structure-based techniques.

Option b is incorrect because structure-based techniques can be used to test any programmatic aspects of software, such as menus.

Option d is incorrect because, while it is true that neither provides opportunities for using experience in deriving tests, this is a reason for a third class of test design techniques based on experience rather than a reason to use both specification-based and structure-based techniques.

Option c is correct because each technique is effective for certain classes of defect so that they complement each other.

Question 11

This is a K2 question relating to Learning Objective CTFL-2.2.1 – Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g. functional or structural), and related work-products, people who test, types of defects and failures to be identified.

Option a would be a test basis for unit testing.

Option b would also be a test basis for unit testing.

Option c is incorrect because coding standards are used by the developers when programming and would not form a test basis for any testing.

Option d is the correct answer because it defines the system functionality.

Question 12

This is a K3 question relating to Learning Objective CTFL-4.3.1 – Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables.

The system has a precision of ±10 miles, so boundary values need to be 10 miles above or below the boundary.

Option a takes the lower boundary value to be 10 miles below the actual boundary in each case, which is correct.

Option b takes the lower boundary values to be 1 mile below the boundary, but this is a higher precision value than the system can generate.

Option c also adds 1 to the boundary values, so it is incorrect for the same reason.

Option d takes the precision to be ±100 miles instead of ±10.
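The derivation can be sketched as below; the boundary figures of 100 and 500 miles are illustrative, since the question's actual boundaries are not reproduced here:

    # Boundary values one unit of precision either side of each boundary.
    PRECISION = 10  # the system cannot resolve distances finer than 10 miles

    def boundary_values(boundary, precision=PRECISION):
        return boundary - precision, boundary + precision

    for b in (100, 500):
        print(b, '->', boundary_values(b))
    # 100 -> (90, 110); 500 -> (490, 510). Using +/- 1 (options b and c)
    # assumes finer precision than the system supports; +/- 100 (option d)
    # is ten times too coarse.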

Question 13

This is a K2 question relating to Learning Objective CTFL-1.1.4 – Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality.

All of the options have some element of truth about them, but in every case except the correct response a true statement is combined with a falsehood.

Option a states that testing is about ‘detecting and removing all the defects’, but testing cannot necessarily find all defects, and is not about correcting defects that are found. In any event, the risk may be so small that correction is deemed unnecessary in some cases.

Option b suggests that testing can contribute to quality by measuring reliability, which is correct, but it also asserts that reliability should always be over 99.99 per cent, which is not correct and is not mentioned in the syllabus.

Coding standards (option c) are also not mentioned in syllabus section 1.1.4 but, even if they were, some deviations from coding standards may not be deemed necessary to correct and testing is about discovery, not correction.

This leaves option d. Testing enables root causes of defects to be found, which makes it possible to reduce defect counts by applying lessons learned. So option d is the only response that makes a correct assertion and does not also contain a false statement about testing.

Question 14

This is a K2 question relating to Learning Objective CTFL-3.2.2 – Explain the differences between different types of review: informal review, technical review, walkthrough and inspection.

The roles of moderator and manager are similar in that they are both coordinating roles. A moderator manages the review activity, including the setting of timescales for individual preparation – so role 1 relates to activity A. Role 2 is the note taker, which relates to activity C. Role 3 carries out the review of the document and will note defects found, so it relates to activity D. Role 4 is the role which decides that formal reviews will be conducted, so it relates to activity B. Option b is therefore the correct answer.

Question 15

This is a K2 question relating to Learning Objective CTFL-5.4.1 – Summarise how configuration management supports testing.

Item (iii) is part of incident management and item (v) is part of test planning, so options a, c and d are incorrect because they contain either item (iii) or item (v) or both.

Option b contains items (i), (ii) and (iv), all of which are correct, so it is the correct answer.

Question 16

This is a K1 question related to Learning Objective CTFL-4.2.1 – Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each.

Options a, b and c are all specification-based test case design techniques, leaving option d as the correct answer.

Options a and c can both distract a candidate who is not certain which techniques are structure-based because they sound ‘technical’.

Option b can distract because its name is similar to decision testing even though the technique is very different.

Question 17

This is a K2 question relating to Learning Objective CTFL-3.1.3 – Explain the difference between static and dynamic techniques.

Option a will be detected during functional testing.

Option b will be detected during non-functional testing.

Option c is related to security testing.

Option d is the correct answer: it can be found during a review, because the code can be checked against the coding standards.

Question 18

This is a K3 question relating to Learning Objective CTFL-4.4.3 – Write test cases from given control flows using statement and decision test design techniques.

Option a would take path ABCEFBG and then ABCDFBG to achieve 100 per cent decision coverage, and is the correct answer.

Option b would take ABG and then ABCDFBG to achieve 75 per cent decision coverage.

Option c would take ABCDFBG and then ABCDFBG again to achieve 75 per cent decision coverage.

Option d would take ABG and then ABCDFBG to achieve 75 per cent decision coverage.
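The decision coverage of each option can be checked mechanically, as in the sketch below. The four decision outcomes are read off the paths named above – B can exit to C or to G, and C can exit to D or to E – but this encoding of the control flow graph is an assumption for illustration:

    # Measure decision coverage achieved by a set of executed paths.
    DECISION_OUTCOMES = {('B', 'C'), ('B', 'G'), ('C', 'D'), ('C', 'E')}

    def decision_coverage(paths):
        covered = set()
        for path in paths:
            # Keep each traversed edge that is a decision outcome.
            covered |= {e for e in zip(path, path[1:]) if e in DECISION_OUTCOMES}
        return len(covered) / len(DECISION_OUTCOMES)

    print(decision_coverage(['ABCEFBG', 'ABCDFBG']))  # option a -> 1.0
    print(decision_coverage(['ABG', 'ABCDFBG']))      # option b -> 0.75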

Question 19

This is a K3 question relating to Learning Objective CTFL-4.4.3 – Write test cases from given control flows using statement and decision test design techniques.

Test A covers one exit from B, one exit from E and one exit from F.

Test B covers both exits from B and E, but only one exit from F (the same as covered by Test A).

Test C covers one exit from B, one exit from E and the second exit from F.

So the three tests cover both exits from each of the three decisions in the control flow graph and therefore 100 per cent decision coverage is achieved. One hundred per cent decision coverage guarantees 100 per cent statement coverage, so option c is correct.

Question 20

This is a K2 question relating to Learning Objective CTFL-1.3.1 – Explain the seven principles in testing.

Option c is the best statement of the defect clustering principle – defects are often found in the same places, so testing should focus first on areas where defects are expected or where defect density is high.

Option a appears to reflect the defect clustering principle, but there is no reason to assume that defect clustering will be associated with the work of the most junior developer.

Option b is an example of the ‘absence of errors’ fallacy – the fact that no errors have been found does not mean that code can be released into production.

Option d is a direct statement of the ‘absence of errors’ fallacy.

Question 21

This is a K1 question relating to Learning Objective CTFL-2.3.2 – Recognise that functional and structural tests occur at any test level.

Item (i) is correct – specification-based testing is one technique appropriate to functional testing.

Item (ii) is incorrect – the purpose of structural testing is not to test non-functional requirements but to test structural aspects of software, such as program code and menu structures.

Item (iii) is correct as stated in the syllabus.

Item (iv) is incorrect – regression testing may be functional in nature, but its purpose is to test that changes to software have not introduced new defects, so the terms functional testing and regression testing do not have the same meaning.

Items (i) and (iii) are correct, which leads to option d being the correct answer.

Question 22

This is a K2 question relating to Learning Objective CTFL-5.2.2 – Summarise the purpose and content of the test plan, test design specification, test procedure and test report documents according to IEEE Std. 829 – 1998.

Item 1 is an activity leading to a test design specification (B).

Item 2 is an activity leading to a test procedure specification (C).

Item 3 is an activity leading to a test plan (A).

Item 4 is an activity leading to a test summary report (D).

Option b is the required combination.

Question 23

This is a K2 question relating to Learning Objective CTFL-3.3.2 – Describe, using examples, the typical benefits of static analysis.

The purpose of static analysis is to analyse code prior to execution. This applies to items (i), (ii) and (iv). Option d is therefore the correct answer.

Item (iii) is a benefit of test execution, while item (v) is a benefit of reviews rather than static analysis (note that reviews are also a static technique, but distinct from static analysis).

Note also that items (ii) and (iv) apply to reviews as well as to static analysis.

Question 24

This is a K1 question relating to Learning Objective CTFL-4.5.1 – Recall reasons for writing test cases based on intuition, experience and knowledge about common defects.

Option a suggests that experience-based techniques are more effective than specification-based techniques at finding functional defects, which is not true; experience-based techniques are good at complementing the systematic tests derived from specifications.

Option b suggests that experience-based techniques are more effective than structure-based techniques at finding defects in code. This is not true, though experience can usefully supplement structure-based tests.

Option d suggests that users are better at finding defects than testers with no experience of the system under test. This is not always true, though it may sometimes be true.

This leaves option c, which correctly states that experience-based tests are needed ‘Because experience-based techniques can be used when no specifications are available or when specifications are not detailed enough’.

Question 25

This is a K2 question relating to Learning Objective CTFL-6.1.1 – Classify different types of test tools according to the activities of the fundamental test process and the software life cycle.

A configuration management tool (C) enables assets to be managed (whether code, documents or test plans) (1).

A test design tool (A) enables tests to be created, from the information provided (4).

Static analysis tools (D) look at software code, and examine (‘test’) it without executing it; the purpose is to identify any aspect of the code that will cause the program to malfunction when executed (2).

A test harness (B) provides an interface that is identical to that of a software program (which might not have been written yet) to allow programs that will interface with that program to be tested (3).
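The idea behind a test harness can be sketched briefly; every name here is illustrative rather than taken from the syllabus:

    # A stub presents the same interface as a module that is not yet
    # written, so the programs that call it can be tested now.
    class PaymentGatewayStub:
        def authorise(self, amount: float) -> bool:
            # Canned response in place of real behaviour.
            return amount <= 1000.0

    def checkout(gateway, amount: float) -> str:
        # Code under test, written against the shared interface.
        return 'OK' if gateway.authorise(amount) else 'DECLINED'

    assert checkout(PaymentGatewayStub(), 250.0) == 'OK'
    assert checkout(PaymentGatewayStub(), 5000.0) == 'DECLINED'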

The only option that contains this set of relationships is option c.

Question 26

This is a K1 question relating to Learning Objective CTFL-6.1.1 – Classify different types of test tools according to the activities of the fundamental test process and the software life cycle.

Tools mainly used by developers are marked by a ‘D’ in the syllabus, and the only one so indicated is option b. This type of tool is used to search for memory leaks, for example, and is used by staff during the development stages.

Option a, monitoring tools, which monitor system resources and network traffic, are mainly used when the whole system comes together and basic functionality is present, so they are used widely by testers.

Option d, incident management tools, record and monitor defects and incidents and are used extensively by testers.

Option c, review tools, are used during the review process by anyone undertaking a review and are not limited to software developers.

Question 27

This is a K1 question relating to Learning Objective CTFL-6.2.2 – Remember special considerations for test execution tools, static analysis and test management tools.

A unit test framework (option a) may or may not be used to facilitate unit testing, but this is not a scripting technique.

Option c is useful to ensure that the correct modules and the correct versions of modules are available for testing. This is not a scripting technique.

A test oracle (option d) would be used to identify expected results, but this is not a scripting technique.

Option b is the only scripting technique.

Question 28

This is a K1 question relating to Learning Objective CTFL-1.5.1 – Recall the psychological factors that influence the success of testing.

The options are taken directly from the syllabus, section 1.5, where the answers are listed in ascending order of independence. The order is, from low to high:

  • Code author (option d).
  • A fellow member of the design team (option a).
  • A different group in the same company (option c).
  • Someone from a different organisation (option b).

Question 29

This is a K3 question relating to Learning Objective CTFL-5.2.5 – Write a test execution schedule for a given set of test cases, considering prioritisation, technical and logical dependencies.

  • TE2 will be available on day 1.
  • Test cases requiring TE2 are TC2 and TC4.
  • TC2 must be run after TC4.

Thus our first two test cases in order are TC4 and TC2.

  • On day 2, TC1, TC3 and TC5 can be executed.
  • TC3 must be run after TC5.

The question does not make clear when TC1 should be run. However, the only option in which TC4, TC2, TC5 and TC3 appear in that relative order is option a, so this is the correct answer.
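The scheduling logic can be sketched as follows; the earliest-day and run-after constraints come from the question as summarised above, while the tie-breaking (and hence TC1's exact slot) is an assumption:

    # Order test cases by earliest available day, respecting dependencies.
    earliest_day = {'TC4': 1, 'TC2': 1, 'TC1': 2, 'TC3': 2, 'TC5': 2}
    run_after = {'TC2': 'TC4', 'TC3': 'TC5'}   # TC2 after TC4, TC3 after TC5

    def schedule(tests):
        ordered, remaining = [], list(tests)
        while remaining:
            for tc in sorted(remaining, key=lambda t: (earliest_day[t], t)):
                dep = run_after.get(tc)
                if dep is None or dep in ordered:   # dependency satisfied
                    ordered.append(tc)
                    remaining.remove(tc)
                    break
        return ordered

    print(schedule(['TC1', 'TC2', 'TC3', 'TC4', 'TC5']))
    # -> ['TC4', 'TC2', 'TC1', 'TC5', 'TC3']: TC4, TC2, TC5 and TC3 keep
    # the required relative order.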

Question 30

This is a K1 question relating to Learning Objective CTFL-2.4.2 – Recognise indicators for maintenance testing (modification, migration and retirement).

Option a is incorrect because non-functional testing can include tests for maintainability of the code, but this is not the focus of maintenance testing.

Option b is the correct answer as defined in the syllabus.

Option c is incorrect because regression testing forms a key part of maintenance testing to check for unintended changes after modifications have been performed, but maintenance testing is not just a form of regression testing.

Option d is incorrect because maintenance testing does indeed use impact analysis to decide how much regression testing to do, but after the system has gone live, not during user acceptance testing (i.e. before the system has gone live).

Question 31

This is a K1 question relating to Learning Objective CTFL-4.5.1 – Recall reasons for writing test cases based on intuition, experience and knowledge about common defects.

Option a is a definition of specification-based testing, which is a formal approach.

Option b is not true, in that knowledge of how other software works does not necessarily translate to a new system.

Option d is invalid in that knowledge of how software performs does not enable users (or anyone else) to create specification-based tests; this would need access to a specification.

This leaves option c, which correctly states that ‘Testers can anticipate where defects are most likely to be found and direct tests at those areas’. The tests are prioritised using the testers’ intuition, knowledge and experience.

Question 32

This is a K1 question relating to Learning Objective CTFL-5.5.3 – Distinguish between the project and product risks.

Options a, c and d are project risks: option a is a risk to project schedule; option c is a risk to project completion; option d is a risk to project cost.

Option b is the correct answer; an incorrect calculation would cause the product to fail.

Question 33

This is a K2 question relating to Learning Objective CTFL-1.1.3 – Give reasons why testing is necessary by giving examples.

Testing can report that, for a particular version of the software, no residual risks remain, but it cannot ensure that the correct version is actually implemented. Testing is not responsible for the right version of the software being delivered – that is configuration management – so option a is incorrect.

Testing can be used to assess quality, for example by measuring defect density or reliability, so option b is correct.
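As a minimal illustration of such a measurement, defect density can be computed directly; the figures are invented for the example:

    # Defect density: defects found per thousand lines of code (KLOC).
    defects_found = 42
    size_kloc = 12.5

    print(f'{defects_found / size_kloc:.2f} defects per KLOC')
    # -> 3.36 defects per KLOC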

Testing in itself does not improve quality (option c). Testing can identify defects, but it is in the fixing of defects that quality actually improves.

Testing cannot show that software is error free – it can only show the presence of defects (this is one of the seven testing principles) – which rules out option d.

Question 34

This is a K1 question relating to Learning Objective CTFL-6.3.3 – Recognise that factors other than simply acquiring a tool are required for good tool support. In fact this question is arguably at K2 because it requires selection of multiple correct options, but each of these is based on recall rather than comparison or any other K2 level behaviour. The syllabus content does not support the Learning Objective very effectively.

Two of the five stated answers are explicitly excluded from the list of success factors listed in the syllabus. The syllabus specifically states that process should be adapted to fit with the use of the tool (against answer (i)) and that the tool is rolled out incrementally (against answer (iii)), so any option containing either of these items must be incorrect. As a check, the remaining three items ((ii), (iv) and (v)) are included in the same list in section 6.3 of the syllabus. Option c has the three required answers, and excludes the other two, and is therefore the correct option.

Question 35

This is a K1 question relating to Learning Objective CTFL-5.1.2 – Explain the benefits and drawbacks of independent testing within an organisation.

Items (i) and (v) are benefits of independent testing.

Items (ii), (iii) and (iv) are identified in the syllabus as potential drawbacks of independent testing, so option a is the correct answer.

Question 36

This is a K2 question relating to Learning Objective CTFL-2.2.1 – Compare the different levels of testing: related work-products.

Option a is incorrect because it describes user acceptance testing.

Option b is incorrect because it describes regulatory acceptance testing.

Option c is incorrect because it describes beta testing.

Option d is the correct answer – recovery testing is part of operational acceptance testing.

Question 37

This is a K2 question relating to Learning Objective CTFL-5.2.3 – Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach.

In option a, the first statement on metrics-based is incorrect; metrics-based estimation is not based on counting test points, so option a is incorrect.

In option b, the second statement is incorrect because test estimation is not reliant on the opinion of the developer, so option b is incorrect.

In option c, the first statement on metrics-based is incorrect because metrics-based estimation is not based on work-breakdown structures, so option c is incorrect.

Option d is correct because both statements are correct.

Question 38

This is a K2 question relating to Learning Objective CTFL-1.2.3 – Differentiate testing from debugging.

Each option has two parts, and the correct answer will be the option in which both parts are true.

In option a, the first part states that ‘Testing is the process of fixing defects’, which is false; testing is about identifying defects, but not fixing them. Option a is therefore incorrect.

The first part of option b states that ‘Debugging identifies and fixes the cause of failures’, which is correct. The second part states that ‘testing identifies failures’, which is also correct. So option b appears to be the correct answer.

As a cross-check, we look at options c and d to ensure these are incorrect.

Option c states that ‘Formal testing is usually undertaken by the development team; debugging is performed before any testing has taken place.’ Both of these statements are incorrect, so option c is incorrect.

Option d states that ‘Testing aims to assess the quality of the software; debugging assesses whether fixes have been correctly applied.’ The first part of this is correct, but the second part is incorrect, so option d is incorrect.

This confirms that option b is the correct answer.

Question 39

This is a K1 question relating to Learning Objective CTFL-2.1.3 – Recall characteristics of good testing that are applicable in any life cycle model.

Option a is the correct answer.

Option b is incorrect because testing in an iterative model does not require requirements to be captured fully before development or testing begin.

Option c is incorrect because, when buying commercial-off-the-shelf products, the buyer may only need to conduct integration and acceptance testing. The vendor would usually be responsible for the other levels of testing.

Option d is incorrect because test analysis and design should begin during the corresponding development activity.

Question 40

This is a K1 question relating to Learning Objective CTFL-5.3.1 – Recall common metrics used for monitoring test preparation and execution.

Option a is a suitable metric for test case preparation.

Option b is a suitable metric for preparation for test execution.

Option d is a suitable metric for test design.

Option c is the most appropriate metric for test execution.
