7
DESIGNING PERFORMANCE-BASED RISK MANAGEMENT SYSTEMS

Research engineers at MIT have created a prototype machine that eliminates the need for human intuition in big data analysis. At some point in the future, this “Data Science Machine” will obviate the need to create risk management plans and then execute those plans. The machine will perform the risk analysis and then measure its own success. Until that happens, humans will still need to develop risk strategies, identify each risk and its mitigation, and then measure the process and its success.

Risk Strategy

A proactive risk strategy should always be adopted, as shown in Figure 7.1. It is better to plan for possible risks than to have to react to them in a crisis.

Sound risk assessment and risk management planning throughout project implementation can have a big payoff. The earlier a risk is identified and dealt with, the less likely it is to negatively affect project outcomes. Risks are both more probable and more easily addressed early in a project; later in a project, they can be more difficult to deal with and more likely to have a significant negative impact. Risk probability is simply the likelihood that a risk event will occur. Risk impact, in turn, combines the probability of the risk event occurring with the consequences of that event. Impact, in layman’s terms, tells you how much the realized risk is likely to hurt.

The propensity (or probability) of project risk depends on the project’s life cycle, which includes five phases: initiating, planning, executing, controlling, and closing. While problems can occur at any time during a project’s life cycle, problems have a greater chance of occurring earlier due to unknown factors.

The opposite can be said for risk impact. At the beginning of the project, the impact of a problem, assuming it is identified as a risk, is likely to be less severe than it is later in the project life cycle. This is in part because at this early stage there is much more flexibility in making changes and dealing with the risk, assuming it is recognized as a risk. Additionally, if the risk cannot be prevented or mitigated, the resources invested—and potentially lost—at the earlier stages are significantly lower than later in the project. Conversely, as the project moves into the later phases, the consequences become much more serious. This is attributed to the fact that as time passes, there is less flexibility in dealing with problems, significant resources have likely been already spent, and more resources may be needed to resolve the problem.

Figure 7.1 Risk management feedback loop.

Risk Analysis

One method of risk analysis requires modularizing the project into measurable parts. Risk can then be calculated as follows:

  1. Exposure Factor (EF) = Percentage of asset loss caused by identified threat.
  2. Single Loss Expectancy (SLE) = Asset Value × EF.
  3. Annualized Rate of Occurrence (ARO) = Estimated frequency with which a threat will occur within a year. A threat occurring 10 times a year has an ARO of 10.
  4. Annualized Loss Expectancy (ALE) = SLE × ARO.
  5. Safeguard cost/benefit analysis = (ALE before implementing safeguard) − (ALE after implementing safeguard) − (annual cost of safeguard) = value of safeguard to the company.
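As a concrete illustration of these five steps, the short Python sketch below computes SLE, ALE, and the value of a hypothetical safeguard; the asset value, exposure factor, occurrence rates, and safeguard cost are illustrative assumptions, not figures drawn from a real assessment.

# Illustrative sketch of the ALE-style calculation above; all dollar figures,
# exposure factors, and occurrence rates are hypothetical.

def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = Asset Value x EF (EF is the fraction of the asset lost)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x ARO (ARO is the expected number of occurrences per year)."""
    return sle * aro

def safeguard_value(ale_before, ale_after, annual_safeguard_cost):
    """Value of a safeguard = (ALE before) - (ALE after) - (annual cost)."""
    return ale_before - ale_after - annual_safeguard_cost

if __name__ == "__main__":
    asset_value = 250_000          # hypothetical server and its data
    ef = 0.40                      # 40% of the asset lost per incident
    sle = single_loss_expectancy(asset_value, ef)       # 100,000
    ale_before = annualized_loss_expectancy(sle, 0.5)   # one incident every 2 years
    ale_after = annualized_loss_expectancy(sle, 0.1)    # safeguard cuts the frequency
    print(f"SLE = {sle:,.0f}")
    print(f"ALE before safeguard = {ale_before:,.0f}")
    print(f"ALE after safeguard  = {ale_after:,.0f}")
    print(f"Safeguard value      = {safeguard_value(ale_before, ale_after, 15_000):,.0f}")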

Scenario planning has proved itself to be very useful as a tool for making decisions under uncertainty, a hallmark of risk analysis. Because scenario planning is underutilized and often delegated to subordinates, its implementation has had some shortcomings. Erdmann et al. (2015), all of McKinsey, stress that those new to this practice can get caught up in the details or become hampered by some deep-seated planning biases. Thus, they have come up with what they refer to as a “cheat sheet” for the successful use of scenario planning, as shown in Table 7.1.

Risk Identification

Properly identifying risks is critically important. Risk management has many objectives; primary among them are safety, environment, and reputation. In ferreting out the risks for these three generic categories, we must examine the downside of uncertain events, the upside of uncertain events, the downside of general uncertainties, and the upside of general uncertainties.

One method is to create a risk item checklist. A typical project plan might list the following risks:

  1. Customer will change or modify requirements.
  2. Lack of sophistication of end users.
  3. Delivery deadline will be tightened.
  4. End users resist system.
  5. Server may not be able to handle larger number of users simultaneously.
  6. Technology will not meet expectations.
  7. Larger number of users than planned.
  8. Lack of training of end users.
  9. Inexperienced project team.
  10. System (security and firewall) will be hacked.

One way to identify software project risks is by interviewing experienced software project managers in different parts of the world. Create a set of questions and then order them by their relative importance to the ultimate success of a project. For example:

  1. Have top software and customer managers formally committed to support the project?
  2. Are end users enthusiastically committed to the project and the system/product to be built?
  3. Are requirements fully understood by the software engineering team and their customers?
  4. Have customers been involved fully in the definition of requirements?
  5. Do end users have realistic expectations?
  6. Is the project scope stable?
  7. Does the software engineering team have the right mix of skills?
  8. Are project requirements stable?
  9. Does the project team have experience with the technology to be implemented?
  10. Is the number of people on the project team adequate to do the job?
  11. Do all customer/user constituencies agree on the importance of the project and on the requirements for the system/product to be built?

Table 7.1 Dos and Don’ts of Scenario Planning

CHEAT SHEET ON SCENARIO PLANNING
Availability bias: Fight the urge to make decisions based on what you already know.
 What to do: Start with intelligence gathering; identify emerging technological, economic, demographic, and cultural trends within and outside your country, and potential disruptions.
 What to avoid: Relying on readily accessible information or evaluating trends only within the same geography or industry context.

Probability neglect: Beware giving too much weight to unlikely events.
 What to do: Evaluate and prioritize trends using first qualitative, then quantitative, approaches. Evaluating the uncertainties’ relative materiality to the business can be valuable; keep in mind that there are different levels of uncertainty.
 What to avoid: Focusing on numerical precision early in the process. Attempts to quantify what is intrinsically uncertain often lead to over-scrutiny and analysis paralysis, and low-probability events can be easily dismissed as outliers or overemphasized, creating a false sense of precision.

Stability bias: Do not assume the future will look like the past.
 What to do: Build scenarios around the handful of critical, residual uncertainties that typically emerge from this process, engaging top executives through experiential techniques. The implications of each uncertainty are extrapolated into the future to project different outcomes, and the combination of these outcomes becomes the basis for scenarios. The goal is to perceive alternative futures and inspire action in response to them.
 What to avoid: Outsourcing or delegating the creation of scenarios to junior team members.

Optimism and overconfidence biases: Combat overconfidence and excessive optimism.
 What to do: Assess the impact of each scenario and develop strategic alternatives for each, as well as a clear understanding of the organizational, operational, and financial requirements of each. Contingency plans must also be developed for each strategic alternative.
 What to avoid: Planning for a scenario deemed most likely, to the exclusion of all others. Many initiatives fail because uncertainty and the chance of failure are underestimated, and many organizations reinforce this behavior by rewarding managers who speak confidently about their plans more generously than managers who point out that things can go wrong.

Social biases: Encourage free and open debate.
 What to do: Instill the discipline of scenario-based thinking with systems, processes, and capabilities that sustain it. The organization must encourage new mental habits and ways of working, and top managers should freely acknowledge their susceptibility to bias and create an open environment that welcomes dissent.
 What to avoid: Using scenario planning as a one-off exercise or ignoring social dynamics such as groupthink. Without institutional support, biases will be reinforced and amplified.

Source: Based on Erdmann, D., Sichel, B., and Yeung, L., Overcoming Obstacles to Effective Scenario Planning, McKinsey Insights & Publications, June 2015. With permission.

Based on the information uncovered from this questionnaire, we can begin to categorize risks. Software risks generally include project risks, technical risks, and business risks.

Project risks can include budgetary, staffing, scheduling, customer, requirement, and resource problems. Risks are different for each project, and risks change as a project progresses. Project-specific risks could include, for example, the following: lack of staff buy-in, loss of key employees, questionable vendor availability and skills, insufficient time, inadequate project budgets, funding cuts, and cost overruns.

Technical risks can include design, implementation, interface, ambiguity, technical obsolescence, and leading-edge problems. An example of this is the development of a project around a leading-edge technology that has not yet been proved.

Business risks include building a product or system that no one wants (market risk), losing the support of senior management (management risk), building a product that no longer fits into the strategic plan (strategic risk), losing budgetary support (budget risks), and building a product that the sales staff does not know how to sell.

Risks can also be categorized as known, predictable, or unpredictable risks. Known risks are those that can be uncovered on careful review of the project plan and the environment in which the project is being developed (e.g., lack of development tools, unrealistic delivery date, or lack of knowledge in the problem domain). Predictable risks can be extrapolated from past experience. For example, your past experience with the end users has not been good so it is reasonable to assume that the current project will suffer from the same problem. Unpredictable risks are hard, if not impossible, to identify in advance. For example, no one could have predicted the events of September 11, but this one event affected computers worldwide.

Table 7.2 A Typical Risk Table

RISKS CATEGORY PROBABILITY (%) IMPACT
Risk 1 PS 70 2
Risk 2 CU 60 3
Impact values:
 1: Catastrophic
 2: Critical
 3: Marginal
 4: Negligible
Category abbreviations:
 BU: business impact risk
 CU: customer characteristics risk
 PS: process definition risk
 ST: staff size and experience risk
 TE: technology risk

Once risks have been identified, most managers project these risks in two dimensions: likelihood and consequences. As shown in Table 7.2, a risk table is a simple tool for risk projection. First, based on the risk item checklist, list all risks in the first column of the table. Then, in the following columns, fill in each risk’s category, probability of occurrence, and assessed impact. Afterward, sort the table by probability and then by impact, study it, and define a cutoff line (i.e., the line demarcating the threshold of acceptable risk).
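The sketch below shows, in Python, one way this projection might be mechanized: a handful of risks from the sample tables are sorted by probability and impact, and a simple, hypothetical cutoff rule flags the risks that fall above the line.

# A minimal sketch of the risk-table projection described above: list the risks,
# record category, probability, and impact, sort, and apply a cutoff line.
# The risks and numbers mirror the sample tables; the cutoff rule is an assumption.

risks = [
    # (description, category, probability %, impact: 1=catastrophic .. 4=negligible)
    ("Customer will change or modify requirements", "PS", 70, 2),
    ("Lack of sophistication of end users",         "CU", 60, 3),
    ("Delivery deadline will be tightened",         "BU", 50, 2),
    ("End users resist system",                     "BU", 40, 3),
    ("Inexperienced project team",                  "ST", 20, 2),
]

# Sort by descending probability, then by severity (lower impact value = more severe).
risk_table = sorted(risks, key=lambda r: (-r[2], r[3]))

# Hypothetical cutoff: manage any risk with probability >= 40% or catastrophic/critical impact.
managed = [r for r in risk_table if r[2] >= 40 or r[3] <= 2]

for desc, cat, prob, impact in risk_table:
    flag = "MANAGE" if (desc, cat, prob, impact) in managed else "below cutoff"
    print(f"{prob:3d}%  impact {impact}  [{cat}]  {desc}  -> {flag}")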

Table 7.3 Criteria for Determining Likelihood of Occurrence

LIKELIHOOD: WHAT IS THE PROBABILITY THAT THE SITUATION OR CIRCUMSTANCE WILL HAPPEN?
5 (very high) Very likely to occur. The project’s process cannot prevent this event, and no alternative approaches or processes are available. Requires immediate management attention.
4 (high) Highly likely to occur. The project’s process cannot prevent this event, but a different approach or process might. Requires management’s attention.
3 (moderate) Likely to occur. The project’s process may prevent this event, but additional actions will be required.
2 (low) Not likely to occur. The project’s process is usually sufficient to prevent this type of event.
1 (very low) Very unlikely. The project’s process is sufficient to prevent this event.

Table 7.3 describes the generic criteria used for assessing the likelihood that a risk will occur. All risks above the designated cutoff line must be managed and discussed. Factors influencing their probability and impact should be specified.

A risk mitigation, monitoring, and management plan (RMMM) is a tool to help avoid risks. The causes of the risks must be identified and mitigated. Risk monitoring activities take place as the project proceeds and should be planned early. Table 7.4 describes typical criteria that can be used for determining the consequences of each risk.

Sample Risk Plan

An excerpt of a typical RMMM follows:

1.1 Scope and intent of RMMM activities

This project will be uploaded to a server that is exposed to the outside world, so we need to develop security protection. We will need to configure a firewall and restrict access to “authorized users” only, through the linked faculty database. We will also have to address load balancing in case the number of visits to the site becomes very large at one time.

We will need to know how to maintain the database in order to make it more efficient, what type of database we should use, who should have the responsibility to maintain it, and who should be the administrator. Proper training of the aforementioned personnel is very important so that the database and the system contain accurate information.

1.2 Risk management organizational role

The software project manager must keep track of the team’s efforts and schedules. The manager must anticipate any “unwelcome” events that may occur during the development or maintenance stages and establish plans to avoid these events or minimize their consequences.

Table 7.4 Criteria for Determining Consequences


It is the responsibility of everyone on the project team, with the regular input of the customer, to assess potential risks throughout the project. Communication among everyone involved is very important to the success of the project. In this way, it is possible to mitigate and eliminate possible risks before they occur. This is known as a proactive approach or strategy for risk management.

1.3 Risk Description

This section describes the risks that may occur during this project.

1.3.1 Description of Possible Risks

  • Business impact risk (BU): The software produced does not meet the needs of the client who requested the product, or the product no longer fits into the overall business strategy for the company.
  • Customer characteristics risk (CU): The customer is not sufficiently involved in the project or is not available to meet with the developers in a timely manner. The customer’s sophistication with respect to the product being developed, and the ability to use it, is also part of this risk.
  • Development risk (DE): Risks associated with the availability and quality of the tools to be used to build the product. The equipment and software provided by the client on which to run the product must be compatible with the software project being developed.
  • Process definition risk (PS): Does the software being developed meet the requirements as originally defined by the developer and the client? Did the development team follow the correct design throughout the project? These are examples of process risks.
  • Product size risk (PR): The overall size of the software being built or modified. Risks include the customer not providing the proper size of the product to be developed and the software development team misjudging the size or scope of the project. The latter could create a product that is too small (rarely) or too large for the client and could result in a loss of money to the development team, because the cost of developing a larger product cannot be recouped from the client.
  • Staff size and experience risk (ST): Having appropriate and knowledgeable programmers to code the product, as well as the cooperation of the entire software project team. It also means having enough team members who are competent and able to complete the project.
  • Technology risk (TE): The product being developed could be obsolete by the time it is ready to be sold. The opposite effect could also be a factor: the product could be so “new” that end users would have problems using the system and would resist the changes it requires. Technology risk also includes the complexity of the design of the system being developed.

1.4 Risk Table

The risk table provides a simple technique to view and analyze the risks associated with the project. The risks were listed and then categorized using the descriptions of risks given in section 1.3.1. The probability of each risk was then estimated, and its impact on the development process was assessed. A key to the impact values and categories appears at the end of the table.

Probability and Impact for Risk

RISKS CATEGORY PROBABILITY (%) IMPACT
Customer will change or modify requirements PS 70 2
Lack of sophistication of end users CU 60 3
Users will not attend training CU 50 2
Delivery deadline will be tightened BU 50 2
End users resist system BU 40 3
Server may not be able to handle larger number of users simultaneously PS 30 1
Technology will not meet expectations TE 30 1
Larger number of users than planned PS 30 3
Lack of training of end users CU 30 3
Inexperienced project team ST 20 2
System (security and firewall) will be hacked BU 15 2

Impact values:
  1: Catastrophic
  2: Critical
  3: Marginal
  4: Negligible

Category abbreviations:
  • BU: business impact risk
  • CU: customer characteristics risk
  • PS: process definition risk
  • ST: staff size and experience risk
  • TE: technology risk

RMMM Strategy

Each risk or group of risks should have a corresponding strategy associated with it. The RMMM strategy discusses how risks will be monitored and dealt with. Risk plans (i.e., contingency plans) are usually created in tandem with end users and managers. An excerpt of an RMMM strategy follows:

Project Risk RMMM Strategy

The area of design and development that contributes the largest percentage to the overall project cost is the database subsystem. Our estimate for this portion does provide a small degree of buffer for unexpected difficulties (as do all estimates). This effort will be closely monitored, and coordinated with the customer to ensure that any impact, either positive or negative, is quickly identified. Schedules and personnel resources will be adjusted accordingly to minimize the effect, or maximize the advantage as appropriate.

Schedule and milestone progress will be monitored as part of the routine project management with appropriate emphasis on meeting target dates. Adjustments to parallel efforts will be made as appropriate should the need arise. Personnel turnover will be managed through use of internal personnel matrix capacity. Our organization has a large software engineering base with sufficient numbers to support our potential demand.

Technical Risk RMMM Strategy

We are planning for two senior software engineers to be assigned to this project, both of whom have significant experience in designing and developing web-based applications. The project progress will be monitored as part of the routine project management with appropriate emphasis on meeting target dates, and adjusted as appropriate.

Prior to implementing any core operating software upgrades, full parallel testing will be conducted to ensure compatibility with the system as developed. The application will be developed using only public application programming interfaces (APIs), and no ‘hidden’ hooks. While this does not guarantee compatibility, it should minimize any potential conflicts. Any problems identified will be quantified using cost–benefit and trade-off analysis; then coordinated with the customer prior to implementation.

The database subsystem is expected to be the most complex portion of the application; however, it is still a relatively routine implementation. Efforts to minimize potential problems include abstracting the interface from the implementation of the database code, so that the underlying database can be changed with minimal impact. Additionally, only industry-standard SQL calls will be used, avoiding all proprietary extensions.

Business Risk RMMM Strategy

The first business risk, lower than expected success, is beyond the control of the development team. Our only potential impact is to use the current state-of-the-art tools to ensure that performance, in particular database access, meets user expectations; and graphics are designed using industry-standard look-and-feel styles.

Likewise, the second business risk, loss of senior management support, is really beyond the direct control of the development team. However, to help manage this risk, we will strive to impart a positive attitude during meetings with the customer, as well as present very professional work products throughout the development period.

Table 7.5 is an example of a risk information sheet.

Risk Avoidance

Risk avoidance can be accomplished by evaluating the critical success factors (CSF) of a business or business line. Managers are intimately aware of their missions and goals, but they do not necessarily define the processes they require to achieve these goals. In other words, “how are you going to get there?” In these instances, technologists must depart from their traditional venue of top-down methodologies and employ a bottom-up approach. They must work with the business units to discover the goal and work their way up through the policies, procedures, and technologies that will be necessary to arrive at that particular goal. For example, the goal of a fictitious business line is to be able to cut down the production/distribution cycle by a factor of 10, providing a customized product at no greater cost than that of the generic product in the past. To achieve this goal, the technology group needs to get the business managers to walk through the critical processes that need to be invented or changed. It is only at this point that any technology solutions are introduced.

Table 7.5 A Sample Risk Information Sheet

RISK INFORMATION SHEET
Risk id: P02-4-32
Date: March 4, 2017
Probability: 80%
Impact: High
DESCRIPTION:
Only 70% of the software components scheduled for reuse will be integrated into the application. The remaining functionality will have to be custom developed.
REFINEMENT/CONTEXT:
 1. Certain reusable components were developed by a third party with no knowledge of internal design standards
 2. Certain reusable components have been implemented in a language that is not supported on the target environment
MITIGATION/MONITORING:
 1. Contact third party to determine conformance to design standards
 2. Check to see if language support can be acquired
MANAGEMENT/CONTINGENCY PLAN/TRIGGER:
Develop a revised schedule assuming that 18 additional components will have to be built
Trigger: Mitigation steps unproductive as of March 30, 2017
CURRENT STATUS:
In process
Originator: Jane Manager

One technique, called process quality management or PQM, uses the CSF concept. IBM originated this approach, which combines an array of methodologies to solve a persistent problem: how do you get a group to agree on goals and ultimately deliver a complex project efficiently, productively, and with a minimum of risk?

PQM is initiated by gathering, preferably off-site, a team of essential staff. The team’s membership should represent all facets of the project. Obviously, all teams have leaders, and PQM teams are no different. The team leader chosen must have a skill mix closely attuned to the projected outcome of the project. For example, in a PQM team whose assigned goal is to improve plant productivity, the best team leader might well be an expert in process control, even though the eventual solution might turn out to be enhanced automation.

Assembled at an off-site location, the team first develops, in written form, exactly what its mission is. With an open-ended goal such as “determine the best method of employing technology for competitive advantage,” determining the actual mission statement is an arduous task, one best tackled by segmenting this rather vague goal into more concrete subgoals.

In a quick, 10-minute brainstorming session, the team then lists the factors that might inhibit the mission from being accomplished, capturing each inhibitor as a one-word description. The goal is to surface as many of these inhibitors as possible without discussion and without criticism.

At this point the team turns to identifying the CSFs, which are the specific tasks that the team must perform to accomplish its mission. It is vitally important that the entire team reach consensus on the CSFs.

The next step in the PQM process is to make a list of all tasks necessary to accomplish the CSF. The description of each of these tasks, called business processes, should be declarative. Start each with an action word such as study, measure, reduce, negotiate, eliminate.

Table 7.6 and Figure 7.2 show the resulting project chart and priority graph, respectively, that illustrate this PQM technique. The team’s mission, in this example, is to introduce just-in-time (JIT) inventory control, a manufacturing technique that fosters greater efficiency by promoting stocking inventory only to the level of need. The team, in this example, identified six CSFs and 11 business processes labeled P1 through P11.

The project chart is filled out by first ranking the business processes by importance to the project’s success. This is done by comparing each business process to the set of CSFs: a check is made under each CSF that relates significantly to the business process. This procedure is followed until each of the business processes has been analyzed in the same way.

The final column of the project chart permits the team to rank each business process relative to current performance, using a scale of A = excellent, to D = bad, and E = not currently performed.

Table 7.6 CSF Project Chart

Figure 7.2 CSF priority graph.

The priority graph, when completed, will steer the mission to a successful and prioritized conclusion. The two axes of this graph are quality, using the A through E grading scale, and priority, represented by the number of checks each business process received. These values can be lifted directly from the quality and count columns of the project chart, respectively.

The final task of the team is to decide how to divide the priority graph into zones representing first priority, second priority, and so on. In this example, the team has chosen as first priority all business processes, such as “negotiate with suppliers” and “reduce number of parts,” that are rated between a quality of fair and a quality of not currently performed and that received three or more checks. Most groups employing this technique will assign priorities in a similar manner.
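The arithmetic behind the project chart and priority graph is simple enough to automate. The following Python sketch counts the CSF checks for a few business processes, pairs each count with a quality grade, and applies the first-priority rule described above; the processes, CSF labels, and grades are invented for illustration and are not taken from Table 7.6.

# A small sketch of the project-chart arithmetic: count the CSF checks for each
# business process, pair the count with its quality grade, and flag the
# first-priority zone described above (quality C through E with three or more
# checks). The processes and gradings here are hypothetical.

processes = {
    # name: (CSFs the process significantly supports, quality grade A..E)
    "Negotiate with suppliers": ({"CSF1", "CSF2", "CSF4"},         "D"),
    "Reduce number of parts":   ({"CSF1", "CSF3", "CSF5", "CSF6"}, "E"),
    "Measure line stoppages":   ({"CSF2"},                         "B"),
    "Train warehouse staff":    ({"CSF3", "CSF4"},                 "C"),
}

GRADE_ORDER = "ABCDE"  # A = excellent ... D = bad, E = not currently performed

def first_priority(csf_checks, grade):
    """First priority: three or more checks and a quality of C (fair) or worse."""
    return len(csf_checks) >= 3 and GRADE_ORDER.index(grade) >= GRADE_ORDER.index("C")

for name, (csfs, grade) in processes.items():
    zone = "first priority" if first_priority(csfs, grade) else "lower priority"
    print(f"{name:26s} checks={len(csfs)}  quality={grade}  -> {zone}")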

Determining the right project to pursue is one factor in the push for competitive technology. It is equally as important to be able to “do the project right,” which can greatly reduce risk.

Quantitative Risk Analysis

Many methods and tools are available for quantitatively combining and assessing risks. The selected method will involve a trade-off between the sophistication of the analysis and its ease of use. There are at least five criteria to help select a suitable quantitative risk technique:

  1. The methodology should be able to include the explicit knowledge of the project team members about the site, design, political conditions, and project approach.
  2. The methodology should allow quick response to changing market factors, price levels, and contractual risk allocation.
  3. The methodology should help determine project cost and schedule contingency.
  4. The methodology should help foster clear communication among the project team members and between the team and higher management about project uncertainties and their impacts.
  5. The methodology should be easy to use and understand.

Three basic risk analyses can be conducted during a project risk analysis: technical performance risk analysis (will the project work?), schedule risk analysis (when will the project be completed?), and cost risk analysis (what will the project cost?). Technical performance risk analysis can provide important insights into technology-driven cost and schedule growth for projects that incorporate new and unproven technology. Reliability analysis, failure modes and effects analysis (FMEA), and fault tree analysis are just a few of the technical performance analysis methods commonly used. However, for the purposes of brevity, this discussion of quantitative risk analysis will concentrate on cost and schedule risk analysis only.

At a computational level there are two considerations about quantitative risk analysis methods. First, for a given method, what input data are required to perform the risk analysis? Second, what kinds of data, outputs, and insights does the method provide to the user?

The most stringent methods are those that require as inputs probability distributions for the various performance, schedule, and cost risks. Risk variables are differentiated based on whether they can take on any value in a range (continuous variables) or only certain distinct values (discrete variables). Whether a risk variable is discrete or continuous, two other considerations are important in defining an input probability distribution: its central tendency and its range or dispersion. An input variable’s mean and mode are alternative measures of central tendency; the mode is the most likely value across the variable’s range, while the mean is the probability-weighted average of all possible values (the expected value).

The other key consideration when defining an input variable is its range or dispersion. The common measure of dispersion is the standard deviation, which is a measure of the breadth of values possible for the variable. Normally, the larger the standard deviation, the greater the relative risk. Finally, a probability variable may also be distinguished by its shape, that is, the type of distribution. Continuous distributions commonly used in project risk analysis are the normal, lognormal, and triangular distributions.

These distributions each have a single high point (the mode) and a mean value that may or may not equal the mode. Some are symmetrical about the mean, while others are not. Selecting an appropriate probability distribution is a matter of which distribution most resembles the distribution of actual data. In cases where insufficient data are available to completely define a probability distribution, one must rely on a subjective assessment of the needed input variables.
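For readers who want to see these shapes side by side, the brief NumPy sketch below draws samples from a normal, a lognormal, and a triangular distribution with arbitrary, illustrative parameters and reports the mean, median, and standard deviation of each; for the skewed shapes, the mean sits above the median.

# A brief sketch of the three continuous distribution shapes named above.
# The parameters represent a hypothetical cost item in dollars.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

samples = {
    "normal":     rng.normal(loc=100_000, scale=15_000, size=n),
    "lognormal":  rng.lognormal(mean=11.5, sigma=0.25, size=n),
    "triangular": rng.triangular(left=80_000, mode=95_000, right=140_000, size=n),
}

for name, x in samples.items():
    print(f"{name:10s} mean={x.mean():10,.0f}  median={np.median(x):10,.0f}  std={x.std():9,.0f}")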

The type of outputs a technique produces is an important consideration when selecting a risk analysis method. Techniques that require greater rigor, demand stricter assumptions, or need more input data generally produce results that contain more information and are more helpful. Results from risk analyses may be divided into three groups according to their primary output:

  1. Single parameter output measures
  2. Multiple parameter output measures
  3. Complete distribution output measures

The type of output required for an analysis is a function of the objectives of the analysis. If, for example, a project manager needs approximate measures of risk to help in project selection studies, simple mean values (a single parameter) or a mean and a variance (multiple parameters) may be sufficient. On the other hand, if a project manager wishes to use the output of the analysis to aid in assigning contingency to a project, knowledge about the precise shape of the tails of the output distribution or the cumulative distribution is needed (complete distribution measures). Finally, when identification and subsequent management of the key risk drivers are the goals of the analysis, a technique that helps with such sensitivity analyses is an important selection criterion.

Sensitivity analysis is a primary modeling tool that can be used to assist in valuing individual risks, which is extremely valuable in risk management and risk allocation support. A “tornado diagram” is a useful graphical tool for depicting risk sensitivity or influence on the overall variability of the risk model. Tornado diagrams graphically show the correlation between variations in model inputs and the distribution of the outcomes; in other words, they highlight the greatest contributors to the overall risk. Figure 7.3 shows a tornado diagram for a sample project. The length of the bars on the tornado diagram corresponds to the influence of the items on the overall risk.
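One simple way to approximate a tornado-style ranking is to correlate each simulated input with the simulated total and sort by the absolute correlation, as in the Python sketch below; the three cost drivers and their distributions are hypothetical.

# A rough sketch of deriving a tornado-style sensitivity ranking from simulation
# samples: correlate each input with the total cost and sort by the absolute
# correlation. The longest bars (largest correlations) top the tornado diagram.
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

labor     = rng.triangular(400_000, 500_000, 750_000, n)
materials = rng.triangular(200_000, 220_000, 260_000, n)
permits   = rng.triangular(30_000, 40_000, 45_000, n)
total = labor + materials + permits

inputs = {"labor": labor, "materials": materials, "permits": permits}
sensitivity = {name: np.corrcoef(x, total)[0, 1] for name, x in inputs.items()}

for name, r in sorted(sensitivity.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:10s} correlation with total cost: {r:+.2f}")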

The selection of a risk analysis method requires an analysis of what input risk measures are available and what types of risk output measures are desired. These methods range from simple, empirical methods to computationally complex, statistically based methods.

Figure 7.3 A tornado diagram.

Traditional methods for risk analysis are empirically developed procedures that concentrate primarily on developing cost contingencies for projects. These methods assign a risk factor to various project elements based on historical knowledge of the relative risk of those elements. For example, documentation costs may exhibit a low degree of cost risk, whereas labor costs may display a high degree of cost risk. Project contingency is determined by multiplying the estimated cost of each element by its respective risk factor and summing the results. These methods profit from their simplicity and do produce an estimate of cost contingency. However, the project team’s knowledge of risk is only implicitly incorporated in the various risk factors. Because of the historical or empirical nature of the risk assessments, traditional methods do not promote communication of the risk consequences of specific project risks, nor do they support the identification of specific project risk drivers. They are also not well adapted to evaluating project schedule risk.
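A minimal sketch of this traditional contingency calculation, with hypothetical elements and risk factors, might look like the following.

# A minimal sketch of the traditional (empirical) contingency method described
# above: multiply each element's estimated cost by a historically derived risk
# factor and sum the results. The elements and factors are hypothetical.

elements = [
    # (project element, estimated cost, empirical risk factor)
    ("Documentation", 40_000,  0.05),   # historically low cost risk
    ("Labor",         600_000, 0.20),   # historically high cost risk
    ("Equipment",     250_000, 0.10),
]

base_cost = sum(cost for _, cost, _ in elements)
contingency = sum(cost * factor for _, cost, factor in elements)

print(f"Base estimate:           {base_cost:,.0f}")
print(f"Contingency:             {contingency:,.0f}")
print(f"Budget with contingency: {base_cost + contingency:,.0f}")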

Analytical methods, sometimes called second-moment methods, rely on the calculus of probability to determine the mean and standard deviation of the output (i.e., project cost). These methods use formulas that relate the mean value of individual input variables to the mean value of the variables’ output. Likewise, there are formulas that relate the variance (standard deviation squared) to the variance of the variables’ output. These methods are most appropriate when the output is a simple sum or product of the various input values. The following formulas show how to calculate the mean and variance of a simple sum.

For sums of risky variables, Y = x1 + x2, the mean is E(Y) = E(x1) + E(x2) and the variance is σY² = σx1² + σx2².

For products of risky variables, Y = x1 × x2, the mean is E(Y) = E(x1) × E(x2) and the variance is σY² = E(x1)²σx2² + E(x2)²σx1² + σx1²σx2².
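The sketch below applies these two formulas to a pair of hypothetical, independent cost items and cross-checks the results with a quick simulation; the product formulas assume the variables are independent.

# The second-moment formulas above, applied to two hypothetical, independent
# cost items and cross-checked with a quick simulation (a sketch, not a full
# analytical model).
import numpy as np

mean1, sd1 = 120_000, 15_000
mean2, sd2 = 80_000, 10_000

# Sum of risky variables
mean_sum = mean1 + mean2
var_sum = sd1**2 + sd2**2

# Product of risky variables (independence assumed)
mean_prod = mean1 * mean2
var_prod = mean1**2 * sd2**2 + mean2**2 * sd1**2 + sd1**2 * sd2**2

rng = np.random.default_rng(0)
x1 = rng.normal(mean1, sd1, 1_000_000)
x2 = rng.normal(mean2, sd2, 1_000_000)

print(f"sum:  analytic mean={mean_sum:,.0f}, sd={var_sum**0.5:,.0f}; "
      f"simulated mean={(x1 + x2).mean():,.0f}, sd={(x1 + x2).std():,.0f}")
print(f"prod: analytic mean={mean_prod:,.0f}, sd={var_prod**0.5:,.0f}; "
      f"simulated mean={(x1 * x2).mean():,.0f}, sd={(x1 * x2).std():,.0f}")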

Analytical methods are relatively simple to understand. They require only an estimate of each variable’s mean and standard deviation, not precise knowledge of the shape of its distribution. They allow specific knowledge of risk to be incorporated into the standard deviation values, and they provide a practical estimate of cost contingency. Analytical methods are not particularly useful for communicating risks, however; they are difficult to apply and are rarely appropriate for schedule risk analysis.

Simulation models, also called Monte Carlo methods, are computerized probabilistic calculations that use random number generators to draw samples from probability distributions. The objective of the simulation is to find the effect of multiple uncertainties on a value quantity of interest (such as the total project cost or project duration). Monte Carlo methods have many advantages. They can determine risk effects for cost and schedule models that are too complex for common analytical methods. They can explicitly incorporate the risk knowledge of the project team for both cost and schedule risk events. They have the ability to reveal, through sensitivity analysis, the impact of specific risk events on the project cost and schedule.

However, Monte Carlo methods require knowledge and training for their successful implementation. Input to Monte Carlo methods also requires the user to specify exact probability distribution information: the mean, standard deviation, and distribution shape. Nonetheless, Monte Carlo methods are the most common for project risk analysis because they provide detailed, illustrative information about risk impacts on the project cost and schedule.

Monte Carlo analysis histogram information is useful for understanding the mean and standard deviation of analysis results. The cumulative chart is useful for determining project budgets and contingency values at specific levels of certainty or confidence. In addition to graphically conveying information, Monte Carlo methods produce numerical values for common statistical parameters, such as the mean, standard deviation, distribution range, and skewness.
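A compact Monte Carlo sketch along these lines appears below: three hypothetical cost elements are modeled with triangular distributions, and the simulated total yields the mean, standard deviation, and percentile values from which a contingency can be read.

# A compact Monte Carlo sketch in the spirit of the discussion above. The three
# cost elements and their triangular distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

design       = rng.triangular(80_000, 100_000, 150_000, n)
construction = rng.triangular(400_000, 450_000, 600_000, n)
testing      = rng.triangular(50_000, 60_000, 90_000, n)
total = design + construction + testing

p50, p80 = np.percentile(total, [50, 80])
print(f"mean total cost: {total.mean():,.0f}")
print(f"std deviation:   {total.std():,.0f}")
print(f"P50 (median):    {p50:,.0f}")
print(f"P80 budget:      {p80:,.0f}")
print(f"contingency to reach 80% confidence: {p80 - total.mean():,.0f}")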

Probability trees are simple diagrams showing the effect of a sequence of multiple events. Probability trees can also be used to evaluate specific courses of action (i.e., decisions), in which case they are known as decision trees. Probability trees are especially useful for modeling the interrelationships between related variables by explicitly modeling conditional probability conditions among project variables. Historically, probability trees have been used in reliability studies and technical performance risk assessments. However, they can be adapted to cost and schedule risk analysis quite easily. Probability trees have rigorous requirements for input data. They are powerful methods that allow the examination of both data and model risks. Their implementation requires a significant amount of expertise; therefore, they are used only on the most difficult and complex projects.
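The sketch below evaluates a small, hypothetical probability tree by rolling expected costs back from the leaves; the same structure, with decision nodes added, underlies a decision tree.

# A small sketch of a probability tree evaluated by rolling expected values back
# from the leaves. The events, probabilities, and costs are hypothetical.

# Each branch: (probability, outcome cost or nested branches)
tree = [
    (0.7, [                      # vendor delivers on time
        (0.9, 1_000_000),        #   integration succeeds
        (0.1, 1_400_000),        #   integration rework needed
    ]),
    (0.3, [                      # vendor delivers late
        (0.6, 1_300_000),        #   schedule recovered with overtime
        (0.4, 1_800_000),        #   milestone missed, penalties incurred
    ]),
]

def expected_cost(node):
    """Recursively compute the probability-weighted cost of a (sub)tree."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_cost(child) for p, child in node)

print(f"Expected project cost: {expected_cost(tree):,.0f}")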

Risk Checklists

Table 7.7, Framework for a Project Plan, sets forth the key aspects of project implementation that need to be addressed and the important issues that need to be considered for each aspect. To help managers consider the wide variety of risks any project could face, Table 7.8, Examples of Common Project-Level Risks, sets forth examples of major areas in which risks can occur and examples of key risks that could arise in each area.

Table 7.7 Framework for Project Plan

PROJECT
RESPONSIBLE MANAGER
Mission Articulate clearly the mission or goal/vision for the project.
Objectives Ensure that the project is feasible and will achieve the project mission. Clearly define what you hope to achieve by executing the project and make sure project objectives are clear and measurable.
Scope Ensure that an adequate scope statement is prepared that documents all the work of the project.
Deliverables Ensure that all deliverables are clearly defined and measurable.
Milestones/costs Ensure that realistic milestones are established and costs are properly supported.
Compliance Ensure that the project meets legislative requirements and that all relevant laws and regulations have been reviewed and considered.
Stakeholders Identify team members, project sponsor, and other stakeholders. Encourage senior management support and buy-in from all stakeholders.
Roles and responsibilities Clarify and document roles and responsibilities of the project manager and other team members.
Work breakdown structure (WBS) Make sure that a WBS has been developed and that key project steps and responsibilities are specified for management and staff.
Assumptions Articulate clearly any important assumptions about the project.
Communications Establish main channels of communications and plan for ways of dealing with problems.
Risks Identify high-level risks and project constraints and prepare a risk management strategy to deal with them.
Documentation Ensure that project documentation will be kept and is up to date.
Boundaries Document specific items that are NOT within the scope of the project and any outside constraints to achieving goals and objectives.
Decision-making process Ensure that the decision-making process or processes for the project are documented.
Signatures Key staff signature sign off.

Monitoring will be most effective when managers consult with a wide range of team members and, to the maximum extent possible, use systematic, quantitative data on both implementation progress and project objectives. Table 7.9, Ongoing Risk Management Monitoring for Projects, provides a useful framework for ongoing risk management monitoring of individual projects. Table 7.10, To Ensure Risks Are Adequately Addressed in Project Plan, is useful for ensuring that risks are discussed in detail.

IT Risk Assessment Frameworks

A variety of IT risk assessment frameworks have been developed to deal with the increasingly difficult business of mitigating security problems. It is useful to review these frameworks as the process of identifying, assessing, and mitigating security risks is quite similar to identifying, assessing, and mitigating general project-related IT risks.

Table 7.8 Examples of Common Project-Level Risks

CATEGORY RISK
Scope Unrealistic or incomplete scope definition
Scope statement not agreed to by all stakeholders
Schedule Unrealistic or incomplete schedule development
Unrealistic or incomplete activity estimates
Project management Inadequate skills and ability of the project manager
Inadequate skills and ability of business users or subject-matter experts
Inadequate skills and ability of vendors
Poor project management processes
Lack of or poorly designed change management processes
Lack of or poorly designed risk management processes
Inadequate tracking of goals/objectives throughout the implementation process
Legal Lack of legal authority to implement project
Failure to comply with all applicable laws and regulations
Personnel Loss of key employees
Low availability of qualified personnel
Inadequate skills and training
Financial Inadequate project budgets
Cost overruns
Funding cuts
Unrealistic or inaccurate cost estimates
Organizational/business Lack of stakeholder consensus
Changes in key stakeholders
Lack of involvement by project sponsor
Loss of project sponsor during project
Changes in office leadership
Organizational structure
Business Poor timing of product releases
Unavailability of resources and materials
Poor public image
External Congressional input or interest
Changes in related systems, programs, etc.
Labor strikes or work stoppages
Seasonal or cyclical events
Lack of vendor and supply availability
Financial instability of vendors and suppliers
Contractor or grantee mismanagement
Internal Unavailability of business or technical experts
Technical Complex technology
New or unproven technology
Unavailability of technology
Performance Unrealistic performance goals
Immeasurable performance standards
Cultural Resistance to change
Cultural barriers or diversity issues
Quality Unrealistic quality objectives
Quality standards unmet

Table 7.9 Ongoing Risk Management Monitoring for Projects

REVIEW PERIOD:_______*
SECTION 1: PROGRESS AND PERFORMANCE INDICATORS
Project implementation or outcome objective Progress/performance indicator Status of indicator Are additional actions needed? Notes
A
B
C
D
SECTION 2: REASSESSMENT OF RISKS
Identified risk Actions to be taken Status and effectiveness of actions Are additional actions needed? Notes
1
2
3
4

Operationally critical threat, asset, and vulnerability evaluation (OCTAVE), developed at Carnegie Mellon University, is a suite of tools, techniques, and methods (https://www.cert.org/resilience/products-services/octave/). Under the OCTAVE framework, assets can be people, hardware, software, information, and systems. Risk assessment is performed by small, self-directed teams of personnel across business units and IT. This promotes collaboration on any found risks and provides business leaders with visibility into those risks. OCTAVE looks at all aspects of risk from physical, technical, and people viewpoints. The result is a thorough and well-documented assessment of risks.

Factor analysis of information risk (FAIR) is a framework for understanding, analyzing, and measuring information risk (http://riskmanagementinsight.com/media/docs/FAIR_introduction.pdf). Components of this framework, shown in Figure 7.4, include a taxonomy for information risk, a standardized nomenclature for information-risk terms, a framework for establishing data collection criteria, measurement scales for risk factors, a computational engine for calculating risk, and a model for analyzing complex risk scenarios.

Basic FAIR analysis comprises ten steps in four stages:

  • Stage 1: Identify scenario components: identify the asset at risk, identify the threat community under consideration
  • Stage 2: Evaluate loss event frequency (LEF): estimate the probable threat event frequency (TEF), estimate the threat capability (TCap), estimate control strength (CS), derive vulnerability (Vuln), derive LEF
  • Stage 3: Evaluate probable loss magnitude (PLM): estimate worst-case loss, estimate probable loss
  • Stage 4: Derive and articulate risk

Table 7.10 To Ensure Risks Are Adequately Addressed in Project Plan


FAIR uses dollar estimates for losses and probability values for threats and vulnerabilities. Combined with a range of values and levels of confidence, it allows for true mathematical modeling of loss exposures (e.g., very high [VH] equates to the top 2% when compared against the overall threat population).
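A loose numerical sketch of the FAIR flow, not the official FAIR computational engine, is shown below; the threat event frequency, threat capability, control strength, and loss magnitudes are all assumed values, and the vulnerability estimate is a deliberately crude stand-in for FAIR’s tabulated derivation.

# An illustrative, simplified walk through the four FAIR stages listed above:
# vulnerability is estimated from threat capability versus control strength,
# loss event frequency from TEF x vulnerability, and risk as an annualized
# loss range. All numbers are assumptions.

tef = 4.0                # probable threat event frequency: attempts per year
tcap = 0.7               # threat capability, on a 0..1 scale
control_strength = 0.6   # 0..1; resistance offered by the controls in place

# Crude vulnerability estimate: how much threat capability exceeds the controls.
vulnerability = max(0.0, min(1.0, 0.5 + (tcap - control_strength)))

lef = tef * vulnerability          # loss events per year
probable_loss = 50_000             # probable loss magnitude per event ($)
worst_case_loss = 400_000          # worst-case loss per event ($)

print(f"Vulnerability estimate:         {vulnerability:.2f}")
print(f"Loss event frequency (LEF):     {lef:.2f} events/year")
print(f"Probable annualized loss:       {lef * probable_loss:,.0f}")
print(f"Worst-case annualized exposure: {lef * worst_case_loss:,.0f}")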

Figure 7.4 FAIR framework.

Risk Process Measurement

The process of risk identification, analysis, and mitigation should itself be measured. Toward that end, this final section lists a variety of risk-related metrics. Any of the checklists in the prior sections can be converted into performance metrics. For example, from Table 7.7, “ensure that all deliverables are clearly defined and measurable” can be converted to the metrics “what percentage of deliverables are clearly defined?” and “what percentage of deliverables have corresponding metrics?”
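A tiny sketch of how such metrics might be computed from project records follows; the deliverables listed are hypothetical.

# A tiny sketch of turning the checklist item above into the two metrics just
# mentioned. The deliverables and their attributes are hypothetical.

deliverables = [
    {"name": "Requirements spec", "clearly_defined": True,  "has_metric": True},
    {"name": "Database schema",   "clearly_defined": True,  "has_metric": False},
    {"name": "Training plan",     "clearly_defined": False, "has_metric": False},
]

defined_pct = 100 * sum(d["clearly_defined"] for d in deliverables) / len(deliverables)
metric_pct = 100 * sum(d["has_metric"] for d in deliverables) / len(deliverables)

print(f"Deliverables clearly defined:            {defined_pct:.0f}%")
print(f"Deliverables with corresponding metrics: {metric_pct:.0f}%")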

Other risk-related metrics include

  1. Number of systemic risks identified.
  2. Percentage of process areas involved in risk assessments.
  3. Percentage of key risks mitigated.
  4. Percentage of key risks monitored.
  5. How often the individual risk owners manage and update their risk information.
  6. Timeliness metrics for mitigation plans.
  7. How long risks are worked in the system before closure.
  8. Metrics for type and quantity of open risks in the system broken down by organization.
  9. Time it takes for an input to be elevated to the appropriate decision-maker.
  10. Conformity to standard risk-statement format and size; clarity.
  11. Top N risks compared with original input.
  12. Percentage of risks that are correlated.
  13. Percentage of business strategy objectives mapped to enterprise risk management strategy.
  14. Percentage of business value drivers mapped to risk management value drivers.
  15. Number of times audit committee reviews risk management strategy.
  16. Number of times board discusses risk management strategy in board meetings.
  17. Number of times board reviews risk appetite of the organization.
  18. Number of times CEO invites risk management teams to participate in business strategy formation and proactively identify business risks.
  19. Number of times business strategy implementation failed due to improper risk mitigation. Compare this with the number of times timely intervention of risk managers resulted in faster implementation.
  20. Number of times improper risk mitigation delayed business strategy implementation. Judge this against the number of times timely intervention of risk managers resulted in faster implementation.
  21. Number of times the organization received negative media coverage due to improper risk mitigation. Evaluate against the number of times timely risk mitigation strategy prevented a media disaster.
  22. Number of times the organization faced legal problems due to improper risk mitigation with the number of times risk departments prevented legal problems.
  23. Number of times the actual risk level of the organization exceeded the risk appetite of the organization. Analyze this against the number of times risk departments controlled risks from exceeding risk appetite of the organization.
  24. Amount of financial losses incurred due to ineffective risk management. Balance this with the amount of financial losses prevented due to effective risk management.

In Conclusion

Risk is inherent in all projects. The key to success is to identify risk and then deal with it. Doing this requires the project manager to identify as many risks as possible, categorize those risks, and then develop a contingency plan to deal with each risk. The risk process should always be measured.

Reference

Erdmann, D., Sichel, B., and Yeung, L. (2015). Overcoming obstacles to effective scenario planning, June. McKinsey Insights & Publications. Retrieved from http://www.mckinsey.com/insights/strategy/overcoming_obstacles_to_effective_scenario_planning.
